2023-07-12 10:57:49,892 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356 2023-07-12 10:57:49,912 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 10:57:49,932 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 10:57:49,933 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719, deleteOnExit=true 2023-07-12 10:57:49,933 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 10:57:49,934 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/test.cache.data in system properties and HBase conf 2023-07-12 10:57:49,934 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 10:57:49,934 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir in system properties and HBase conf 2023-07-12 10:57:49,935 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 10:57:49,936 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 10:57:49,936 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 10:57:50,080 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 10:57:50,532 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 10:57:50,536 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:57:50,537 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:57:50,537 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 10:57:50,538 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:57:50,538 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 10:57:50,538 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 10:57:50,538 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:57:50,539 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:57:50,539 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 10:57:50,540 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/nfs.dump.dir in system properties and HBase conf 2023-07-12 10:57:50,540 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir in system properties and HBase conf 2023-07-12 10:57:50,540 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:57:50,541 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 10:57:50,541 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 10:57:51,093 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:57:51,097 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:57:51,420 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 10:57:51,590 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 10:57:51,609 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:51,649 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:51,690 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/Jetty_localhost_36451_hdfs____.vro6o9/webapp 2023-07-12 10:57:51,861 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36451 2023-07-12 10:57:51,872 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:57:51,873 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:57:52,336 WARN [Listener at localhost/36583] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:57:52,455 WARN [Listener at localhost/36583] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:57:52,477 WARN [Listener at localhost/36583] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:52,485 INFO [Listener at localhost/36583] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:52,491 INFO [Listener at localhost/36583] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/Jetty_localhost_35367_datanode____a8iq2h/webapp 2023-07-12 10:57:52,624 INFO [Listener at localhost/36583] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35367 2023-07-12 10:57:52,995 WARN [Listener at localhost/45053] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:57:53,005 WARN [Listener at localhost/45053] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:57:53,008 WARN [Listener at localhost/45053] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:53,009 INFO [Listener at localhost/45053] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:53,017 INFO [Listener at localhost/45053] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/Jetty_localhost_38581_datanode____.nv8my8/webapp 2023-07-12 10:57:53,115 INFO [Listener at localhost/45053] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38581 2023-07-12 10:57:53,125 WARN [Listener at localhost/38323] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:57:53,134 WARN [Listener at localhost/38323] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:57:53,137 WARN [Listener at localhost/38323] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:57:53,139 INFO [Listener at localhost/38323] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:57:53,145 INFO [Listener at localhost/38323] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/Jetty_localhost_44655_datanode____.84zdj7/webapp 2023-07-12 10:57:53,254 INFO [Listener at localhost/38323] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44655 2023-07-12 10:57:53,266 WARN [Listener at localhost/35677] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:57:53,553 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd015b1dcbd070d12: Processing first storage report for DS-41b51c19-ddad-49ba-bf72-db3ca83d1657 from datanode 56647dd7-3fc8-4faf-8ac2-c333fa567928 2023-07-12 10:57:53,554 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd015b1dcbd070d12: from storage DS-41b51c19-ddad-49ba-bf72-db3ca83d1657 node DatanodeRegistration(127.0.0.1:37045, datanodeUuid=56647dd7-3fc8-4faf-8ac2-c333fa567928, infoPort=40041, 
infoSecurePort=0, ipcPort=45053, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4d8983067cabb6ac: Processing first storage report for DS-3ff1b66d-edec-4e67-b225-1d5d86044781 from datanode c3652c40-4af8-4815-817e-8c41a9db1a48 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4d8983067cabb6ac: from storage DS-3ff1b66d-edec-4e67-b225-1d5d86044781 node DatanodeRegistration(127.0.0.1:35873, datanodeUuid=c3652c40-4af8-4815-817e-8c41a9db1a48, infoPort=34329, infoSecurePort=0, ipcPort=38323, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc948b5f5105fb129: Processing first storage report for DS-887fe223-a673-4cfb-b1ce-68110d962dc9 from datanode 9c477974-4950-45df-93d6-941a878dce48 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc948b5f5105fb129: from storage DS-887fe223-a673-4cfb-b1ce-68110d962dc9 node DatanodeRegistration(127.0.0.1:37303, datanodeUuid=9c477974-4950-45df-93d6-941a878dce48, infoPort=43655, infoSecurePort=0, ipcPort=35677, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd015b1dcbd070d12: Processing first storage report for DS-ffcc6674-316a-4ae1-9bdb-a651e057ee57 from datanode 56647dd7-3fc8-4faf-8ac2-c333fa567928 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd015b1dcbd070d12: from storage DS-ffcc6674-316a-4ae1-9bdb-a651e057ee57 node DatanodeRegistration(127.0.0.1:37045, datanodeUuid=56647dd7-3fc8-4faf-8ac2-c333fa567928, infoPort=40041, infoSecurePort=0, ipcPort=45053, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4d8983067cabb6ac: Processing first storage report for DS-08f2bdef-ae15-4f3a-80b9-a33671dd919d from datanode c3652c40-4af8-4815-817e-8c41a9db1a48 2023-07-12 10:57:53,556 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4d8983067cabb6ac: from storage DS-08f2bdef-ae15-4f3a-80b9-a33671dd919d node DatanodeRegistration(127.0.0.1:35873, datanodeUuid=c3652c40-4af8-4815-817e-8c41a9db1a48, infoPort=34329, infoSecurePort=0, ipcPort=38323, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,556 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc948b5f5105fb129: Processing first storage report for DS-a9fa6dc1-3e9b-4414-8674-5f63862578be from datanode 9c477974-4950-45df-93d6-941a878dce48 2023-07-12 10:57:53,556 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc948b5f5105fb129: from storage 
DS-a9fa6dc1-3e9b-4414-8674-5f63862578be node DatanodeRegistration(127.0.0.1:37303, datanodeUuid=9c477974-4950-45df-93d6-941a878dce48, infoPort=43655, infoSecurePort=0, ipcPort=35677, storageInfo=lv=-57;cid=testClusterID;nsid=1224332960;c=1689159471177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:57:53,715 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356 2023-07-12 10:57:53,784 INFO [Listener at localhost/35677] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/zookeeper_0, clientPort=52490, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 10:57:53,798 INFO [Listener at localhost/35677] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52490 2023-07-12 10:57:53,806 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:53,808 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:54,484 INFO [Listener at localhost/35677] util.FSUtils(471): Created version file at hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 with version=8 2023-07-12 10:57:54,484 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/hbase-staging 2023-07-12 10:57:54,492 DEBUG [Listener at localhost/35677] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 10:57:54,492 DEBUG [Listener at localhost/35677] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 10:57:54,492 DEBUG [Listener at localhost/35677] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 10:57:54,493 DEBUG [Listener at localhost/35677] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
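For context, the StartMiniClusterOption recorded at the top of this log (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1) is normally built by the test class itself before it calls HBaseTestingUtility.startMiniCluster. The following is a minimal sketch of that setup, assuming those option values and a hypothetical class name; the real TestRSGroupsAdmin1 adds rsgroup-specific wiring on top of this, and the 13-minute class timeout reported above comes from HBaseClassTestRule reading the test's size category.

    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.testclassification.LargeTests;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;
    import org.junit.experimental.categories.Category;

    @Category({ LargeTests.class })
    public class MiniClusterSetupSketch {
      // HBaseClassTestRule derives the per-class timeout ("timeout: 13 mins" above)
      // from the test's size category.
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterSetupSketch.class);

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Mirrors the StartMiniClusterOption printed by HBaseTestingUtility above:
        // one master, three region servers, three data nodes, one ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        TEST_UTIL.shutdownMiniCluster();
      }
    }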
2023-07-12 10:57:54,875 INFO [Listener at localhost/35677] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 10:57:55,454 INFO [Listener at localhost/35677] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:57:55,493 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:55,493 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:55,493 INFO [Listener at localhost/35677] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:57:55,494 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:55,494 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:57:55,676 INFO [Listener at localhost/35677] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:57:55,765 DEBUG [Listener at localhost/35677] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 10:57:55,881 INFO [Listener at localhost/35677] ipc.NettyRpcServer(120): Bind to /172.31.2.10:38969 2023-07-12 10:57:55,898 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:55,900 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:55,923 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38969 connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:57:55,965 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:389690x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:57:55,968 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38969-0x1015920de080000 connected 2023-07-12 10:57:55,996 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:57:55,997 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:57:56,002 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:57:56,014 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38969 2023-07-12 10:57:56,015 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38969 2023-07-12 10:57:56,015 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38969 2023-07-12 10:57:56,016 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38969 2023-07-12 10:57:56,016 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38969 2023-07-12 10:57:56,050 INFO [Listener at localhost/35677] log.Log(170): Logging initialized @6949ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 10:57:56,179 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:57:56,180 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:57:56,180 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:57:56,182 INFO [Listener at localhost/35677] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:57:56,182 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:57:56,182 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:57:56,186 INFO [Listener at localhost/35677] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
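At this point the master's RPC server is bound on port 38969 and registered against the single-node ZooKeeper ensemble at 127.0.0.1:52490 that the minicluster started earlier. For illustration only (not part of the captured log), a client reaches such a cluster through the same quorum settings; inside the test, TEST_UTIL.getConnection() would supply a preconfigured connection instead of the hand-set values assumed here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ConnectToMiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Ensemble and client port as logged above ("ZooKeeper ensemble=127.0.0.1:52490").
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 52490);
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          // Cluster metadata is served by the active master registered under /hbase/master.
          System.out.println("Cluster id: " + admin.getClusterMetrics().getClusterId());
        }
      }
    }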
2023-07-12 10:57:56,243 INFO [Listener at localhost/35677] http.HttpServer(1146): Jetty bound to port 34865 2023-07-12 10:57:56,244 INFO [Listener at localhost/35677] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:57:56,275 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,278 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@563e1db6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:57:56,279 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,279 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b463d55{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:57:56,460 INFO [Listener at localhost/35677] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:57:56,472 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:57:56,473 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:57:56,475 INFO [Listener at localhost/35677] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:57:56,483 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,511 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66df7ad1{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/jetty-0_0_0_0-34865-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6662942566221396145/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:57:56,522 INFO [Listener at localhost/35677] server.AbstractConnector(333): Started ServerConnector@631e341c{HTTP/1.1, (http/1.1)}{0.0.0.0:34865} 2023-07-12 10:57:56,523 INFO [Listener at localhost/35677] server.Server(415): Started @7421ms 2023-07-12 10:57:56,526 INFO [Listener at localhost/35677] master.HMaster(444): hbase.rootdir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4, hbase.cluster.distributed=false 2023-07-12 10:57:56,605 INFO [Listener at localhost/35677] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:57:56,606 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,606 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,606 INFO 
[Listener at localhost/35677] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:57:56,606 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,607 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:57:56,613 INFO [Listener at localhost/35677] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:57:56,616 INFO [Listener at localhost/35677] ipc.NettyRpcServer(120): Bind to /172.31.2.10:46327 2023-07-12 10:57:56,619 INFO [Listener at localhost/35677] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:57:56,627 DEBUG [Listener at localhost/35677] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:57:56,628 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:56,630 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:56,632 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46327 connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:57:56,640 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:463270x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:57:56,641 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:463270x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:57:56,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46327-0x1015920de080001 connected 2023-07-12 10:57:56,645 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:57:56,647 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:57:56,650 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-12 10:57:56,657 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46327 2023-07-12 10:57:56,658 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46327 2023-07-12 10:57:56,673 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-12 10:57:56,673 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-12 10:57:56,677 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:57:56,677 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:57:56,677 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:57:56,678 INFO [Listener at localhost/35677] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:57:56,679 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:57:56,679 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:57:56,679 INFO [Listener at localhost/35677] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:57:56,682 INFO [Listener at localhost/35677] http.HttpServer(1146): Jetty bound to port 35879 2023-07-12 10:57:56,682 INFO [Listener at localhost/35677] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:57:56,688 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,688 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4f598a02{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:57:56,689 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,689 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b8a2300{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:57:56,824 INFO [Listener at localhost/35677] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:57:56,826 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:57:56,827 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:57:56,827 INFO [Listener at localhost/35677] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:57:56,828 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,832 INFO 
[Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5cdf3cad{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/jetty-0_0_0_0-35879-hbase-server-2_4_18-SNAPSHOT_jar-_-any-422488789070614418/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:57:56,833 INFO [Listener at localhost/35677] server.AbstractConnector(333): Started ServerConnector@5c80b18{HTTP/1.1, (http/1.1)}{0.0.0.0:35879} 2023-07-12 10:57:56,833 INFO [Listener at localhost/35677] server.Server(415): Started @7732ms 2023-07-12 10:57:56,846 INFO [Listener at localhost/35677] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:57:56,846 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,846 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,847 INFO [Listener at localhost/35677] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:57:56,847 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:56,847 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:57:56,847 INFO [Listener at localhost/35677] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:57:56,849 INFO [Listener at localhost/35677] ipc.NettyRpcServer(120): Bind to /172.31.2.10:37587 2023-07-12 10:57:56,849 INFO [Listener at localhost/35677] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:57:56,854 DEBUG [Listener at localhost/35677] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:57:56,855 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:56,857 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:56,859 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37587 connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:57:56,864 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:375870x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:57:56,865 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37587-0x1015920de080002 connected 2023-07-12 10:57:56,865 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:57:56,866 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:57:56,867 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:57:56,868 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37587 2023-07-12 10:57:56,868 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37587 2023-07-12 10:57:56,869 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37587 2023-07-12 10:57:56,871 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37587 2023-07-12 10:57:56,871 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37587 2023-07-12 10:57:56,875 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:57:56,875 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:57:56,875 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:57:56,876 INFO [Listener at localhost/35677] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:57:56,876 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:57:56,876 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:57:56,876 INFO [Listener at localhost/35677] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
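The entries above show the master web UI coming up on port 34865 and the first two region server processes being constructed (RPC ports 46327 and 37587), each with its own BlockCache, MobFileCache, ZooKeeper session, and Jetty info server. Once the full startup completes, the resulting topology can be inspected through MiniHBaseCluster; a small sketch follows, assuming the TEST_UTIL instance from the setup sketch earlier and a hypothetical helper name.

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

    public final class ClusterTopologySketch {
      // Prints the active master and every live region server of the minicluster.
      static void dumpTopology(HBaseTestingUtility testUtil) {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        System.out.println("Active master: " + cluster.getMaster().getServerName());
        List<RegionServerThread> threads = cluster.getLiveRegionServerThreads();
        for (RegionServerThread rst : threads) {
          System.out.println("Region server: " + rst.getRegionServer().getServerName());
        }
      }
    }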
2023-07-12 10:57:56,877 INFO [Listener at localhost/35677] http.HttpServer(1146): Jetty bound to port 36131 2023-07-12 10:57:56,877 INFO [Listener at localhost/35677] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:57:56,885 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,886 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19e0a4a6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:57:56,886 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:56,886 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ba1cd62{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:57:57,026 INFO [Listener at localhost/35677] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:57:57,028 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:57:57,028 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:57:57,028 INFO [Listener at localhost/35677] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:57:57,030 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:57,031 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1ffcc362{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/jetty-0_0_0_0-36131-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6287269684072544386/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:57:57,032 INFO [Listener at localhost/35677] server.AbstractConnector(333): Started ServerConnector@46770358{HTTP/1.1, (http/1.1)}{0.0.0.0:36131} 2023-07-12 10:57:57,032 INFO [Listener at localhost/35677] server.Server(415): Started @7931ms 2023-07-12 10:57:57,047 INFO [Listener at localhost/35677] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:57:57,047 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:57,047 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:57,047 INFO [Listener at localhost/35677] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:57:57,048 INFO 
[Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:57:57,048 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:57:57,048 INFO [Listener at localhost/35677] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:57:57,050 INFO [Listener at localhost/35677] ipc.NettyRpcServer(120): Bind to /172.31.2.10:39937 2023-07-12 10:57:57,050 INFO [Listener at localhost/35677] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:57:57,052 DEBUG [Listener at localhost/35677] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:57:57,053 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:57,055 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:57,057 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39937 connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:57:57,061 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:399370x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:57:57,062 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:399370x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:57:57,063 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:399370x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:57:57,064 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:399370x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:57:57,070 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39937-0x1015920de080003 connected 2023-07-12 10:57:57,070 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39937 2023-07-12 10:57:57,070 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39937 2023-07-12 10:57:57,071 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39937 2023-07-12 10:57:57,072 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39937 2023-07-12 10:57:57,073 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39937 2023-07-12 
10:57:57,076 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:57:57,077 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:57:57,077 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:57:57,077 INFO [Listener at localhost/35677] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:57:57,078 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:57:57,078 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:57:57,078 INFO [Listener at localhost/35677] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:57:57,079 INFO [Listener at localhost/35677] http.HttpServer(1146): Jetty bound to port 37609 2023-07-12 10:57:57,079 INFO [Listener at localhost/35677] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:57:57,083 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:57,083 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c0d721c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:57:57,084 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:57,084 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@933d05e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:57:57,212 INFO [Listener at localhost/35677] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:57:57,214 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:57:57,214 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:57:57,215 INFO [Listener at localhost/35677] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:57:57,216 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:57:57,218 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7a6735a9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/jetty-0_0_0_0-37609-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4740356045208293943/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:57:57,219 INFO [Listener at localhost/35677] server.AbstractConnector(333): Started ServerConnector@5df4bbff{HTTP/1.1, (http/1.1)}{0.0.0.0:37609} 2023-07-12 10:57:57,220 INFO [Listener at localhost/35677] server.Server(415): Started @8118ms 2023-07-12 10:57:57,227 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:57:57,233 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@59729c1c{HTTP/1.1, (http/1.1)}{0.0.0.0:40353} 2023-07-12 10:57:57,233 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @8132ms 2023-07-12 10:57:57,233 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:57,245 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:57:57,246 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:57,275 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:57:57,275 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:57:57,275 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:57:57,275 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:57,275 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:57:57,278 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:57:57,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:57:57,280 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,38969,1689159474659 from backup master directory 2023-07-12 10:57:57,285 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:57,285 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:57:57,286 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:57:57,286 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:57,291 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 10:57:57,293 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 10:57:57,400 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/hbase.id with ID: 56cb56be-be6f-4b5a-b417-f46c46fa5458 2023-07-12 10:57:57,439 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:57:57,455 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:57,516 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x532248e8 to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:57:57,542 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fcc4dba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:57:57,569 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:57:57,571 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:57:57,591 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 10:57:57,591 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 10:57:57,593 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:57:57,597 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:57:57,598 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:57:57,640 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store-tmp 2023-07-12 10:57:57,728 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:57:57,728 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:57:57,728 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:57:57,729 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:57:57,729 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:57:57,729 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:57:57,729 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:57:57,729 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:57:57,732 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/WALs/jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:57,763 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C38969%2C1689159474659, suffix=, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/WALs/jenkins-hbase9.apache.org,38969,1689159474659, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/oldWALs, maxLogs=10 2023-07-12 10:57:57,845 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:57:57,845 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:57:57,845 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:57:57,865 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 10:57:57,965 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/WALs/jenkins-hbase9.apache.org,38969,1689159474659/jenkins-hbase9.apache.org%2C38969%2C1689159474659.1689159477777 2023-07-12 10:57:57,970 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK], DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK], DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK]] 2023-07-12 10:57:57,972 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:57:57,972 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:57:57,978 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:57,979 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:58,072 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:58,086 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:57:58,128 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:57:58,143 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 10:57:58,153 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:58,155 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:58,183 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:57:58,191 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:57:58,192 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10620765600, jitterRate=-0.010864123702049255}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:57:58,192 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:57:58,193 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:57:58,218 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:57:58,219 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:57:58,222 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 10:57:58,224 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 10:57:58,280 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 55 msec 2023-07-12 10:57:58,280 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:57:58,315 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 10:57:58,322 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 10:57:58,331 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 10:57:58,337 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:57:58,341 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:57:58,345 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:58,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:57:58,346 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:57:58,362 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:57:58,367 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:57:58,368 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:57:58,368 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:58,368 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:57:58,368 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,38969,1689159474659, sessionid=0x1015920de080000, setting cluster-up flag (Was=false) 2023-07-12 10:57:58,369 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:57:58,389 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:58,396 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:57:58,398 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:58,405 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:58,411 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:57:58,412 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:57:58,416 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.hbase-snapshot/.tmp 2023-07-12 10:57:58,425 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(951): ClusterId : 56cb56be-be6f-4b5a-b417-f46c46fa5458 2023-07-12 10:57:58,425 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(951): ClusterId : 56cb56be-be6f-4b5a-b417-f46c46fa5458 2023-07-12 10:57:58,425 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(951): ClusterId : 56cb56be-be6f-4b5a-b417-f46c46fa5458 2023-07-12 10:57:58,432 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:57:58,432 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:57:58,432 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:57:58,440 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:57:58,440 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:57:58,440 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:57:58,440 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:57:58,440 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:57:58,440 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:57:58,444 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:57:58,444 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:57:58,444 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:57:58,451 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ReadOnlyZKClient(139): Connect 0x780006b3 to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-12 10:57:58,451 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ReadOnlyZKClient(139): Connect 0x079af665 to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:57:58,451 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ReadOnlyZKClient(139): Connect 0x5905e33e to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:57:58,465 DEBUG [RS:0;jenkins-hbase9:46327] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@523d247c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:57:58,466 DEBUG [RS:0;jenkins-hbase9:46327] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1318a29d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:57:58,466 DEBUG [RS:2;jenkins-hbase9:39937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@429868f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:57:58,466 DEBUG [RS:2;jenkins-hbase9:39937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b0bcf81, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:57:58,466 DEBUG [RS:1;jenkins-hbase9:37587] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b88c642, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:57:58,467 DEBUG [RS:1;jenkins-hbase9:37587] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f12b11e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:57:58,501 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:46327 2023-07-12 10:57:58,505 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:39937 2023-07-12 10:57:58,505 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:37587 2023-07-12 10:57:58,509 INFO [RS:0;jenkins-hbase9:46327] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:57:58,509 INFO [RS:2;jenkins-hbase9:39937] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:57:58,511 INFO [RS:0;jenkins-hbase9:46327] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:57:58,509 INFO [RS:1;jenkins-hbase9:37587] regionserver.RegionServerCoprocessorHost(66): System 
coprocessor loading is enabled 2023-07-12 10:57:58,512 INFO [RS:1;jenkins-hbase9:37587] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:57:58,511 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:57:58,511 INFO [RS:2;jenkins-hbase9:39937] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:57:58,512 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:57:58,512 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:57:58,516 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:39937, startcode=1689159477046 2023-07-12 10:57:58,516 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:37587, startcode=1689159476845 2023-07-12 10:57:58,516 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:46327, startcode=1689159476604 2023-07-12 10:57:58,530 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:57:58,542 DEBUG [RS:2;jenkins-hbase9:39937] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:57:58,544 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:57:58,544 DEBUG [RS:0;jenkins-hbase9:46327] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:57:58,544 DEBUG [RS:1;jenkins-hbase9:37587] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:57:58,547 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:57:58,550 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:57:58,550 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 10:57:58,617 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40835, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:57:58,617 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55615, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:57:58,617 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:49271, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:57:58,634 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:57:58,645 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:57:58,647 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:57:58,659 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 10:57:58,675 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(2830): Master is 
not running yet 2023-07-12 10:57:58,675 WARN [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 10:57:58,675 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 10:57:58,675 WARN [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 10:57:58,675 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 10:57:58,675 WARN [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 10:57:58,705 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:57:58,710 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:57:58,711 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:57:58,711 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 10:57:58,712 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:57:58,713 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,723 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159508723 2023-07-12 10:57:58,725 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:57:58,729 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:57:58,729 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:57:58,730 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 10:57:58,732 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:57:58,737 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:57:58,738 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:57:58,738 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:57:58,738 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:57:58,740 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,742 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:57:58,744 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:57:58,744 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:57:58,750 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:57:58,750 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:57:58,753 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159478752,5,FailOnTimeoutGroup] 2023-07-12 10:57:58,753 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159478753,5,FailOnTimeoutGroup] 2023-07-12 10:57:58,753 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,753 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 10:57:58,755 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,755 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 10:57:58,776 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:46327, startcode=1689159476604 2023-07-12 10:57:58,776 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:39937, startcode=1689159477046 2023-07-12 10:57:58,777 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:37587, startcode=1689159476845 2023-07-12 10:57:58,783 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,785 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:57:58,788 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:57:58,794 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,794 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:57:58,794 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 10:57:58,795 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:58,796 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 2023-07-12 10:57:58,796 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:57:58,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:57:58,797 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36583 2023-07-12 10:57:58,798 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34865 2023-07-12 10:57:58,802 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 2023-07-12 10:57:58,802 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36583 2023-07-12 10:57:58,802 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34865 2023-07-12 10:57:58,804 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 2023-07-12 10:57:58,804 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36583 2023-07-12 10:57:58,804 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34865 2023-07-12 10:57:58,812 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:57:58,815 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,815 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:58,815 WARN [RS:2;jenkins-hbase9:39937] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:57:58,815 INFO [RS:2;jenkins-hbase9:39937] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:57:58,815 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,816 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,815 WARN [RS:1;jenkins-hbase9:37587] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:57:58,816 WARN [RS:0;jenkins-hbase9:46327] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:57:58,821 INFO [RS:1;jenkins-hbase9:37587] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:57:58,822 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:58,821 INFO [RS:0;jenkins-hbase9:46327] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:57:58,823 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,827 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,37587,1689159476845] 2023-07-12 10:57:58,828 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,39937,1689159477046] 2023-07-12 10:57:58,828 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,46327,1689159476604] 2023-07-12 10:57:58,841 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:57:58,842 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:57:58,842 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 2023-07-12 10:57:58,846 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:58,846 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 
10:57:58,846 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,846 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:58,849 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,849 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,849 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:58,850 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,851 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:58,876 DEBUG [RS:1;jenkins-hbase9:37587] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:57:58,876 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:57:58,876 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:57:58,878 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:57:58,881 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:57:58,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/info 2023-07-12 10:57:58,885 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:57:58,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:58,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:57:58,888 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:57:58,889 INFO [RS:0;jenkins-hbase9:46327] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:57:58,889 INFO [RS:2;jenkins-hbase9:39937] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:57:58,889 INFO [RS:1;jenkins-hbase9:37587] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:57:58,890 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:57:58,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:58,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:57:58,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/table 2023-07-12 10:57:58,903 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:57:58,904 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:58,906 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740 2023-07-12 10:57:58,907 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740 2023-07-12 10:57:58,913 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 10:57:58,918 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:57:58,919 INFO [RS:1;jenkins-hbase9:37587] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:57:58,919 INFO [RS:2;jenkins-hbase9:39937] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:57:58,925 INFO [RS:0;jenkins-hbase9:46327] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:57:58,934 INFO [RS:1;jenkins-hbase9:37587] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:57:58,934 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,944 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:57:58,946 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:57:58,942 INFO [RS:0;jenkins-hbase9:46327] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:57:58,948 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10919819520, jitterRate=0.01698744297027588}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:57:58,948 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:57:58,948 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:57:58,948 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:57:58,948 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:57:58,948 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:57:58,948 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:57:58,948 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:57:58,942 INFO [RS:2;jenkins-hbase9:39937] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:57:58,949 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:57:58,950 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,950 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:57:58,950 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:57:58,950 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:57:58,956 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:57:58,956 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 10:57:58,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:57:58,971 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,971 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,971 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 10:57:58,971 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,972 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,972 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:57:58,973 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:57:58,974 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:0;jenkins-hbase9:46327] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:2;jenkins-hbase9:39937] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,973 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,974 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,975 DEBUG [RS:1;jenkins-hbase9:37587] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:57:58,989 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,990 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:58,990 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,002 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,004 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 10:57:59,004 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:57:59,004 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,005 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,005 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,005 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,008 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 10:57:59,022 INFO [RS:0;jenkins-hbase9:46327] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:57:59,026 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46327,1689159476604-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,029 INFO [RS:1;jenkins-hbase9:37587] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:57:59,029 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37587,1689159476845-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,031 INFO [RS:2;jenkins-hbase9:39937] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:57:59,031 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,39937,1689159477046-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:57:59,050 INFO [RS:0;jenkins-hbase9:46327] regionserver.Replication(203): jenkins-hbase9.apache.org,46327,1689159476604 started 2023-07-12 10:57:59,051 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,46327,1689159476604, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:46327, sessionid=0x1015920de080001 2023-07-12 10:57:59,053 INFO [RS:1;jenkins-hbase9:37587] regionserver.Replication(203): jenkins-hbase9.apache.org,37587,1689159476845 started 2023-07-12 10:57:59,054 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:57:59,054 DEBUG [RS:0;jenkins-hbase9:46327] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:59,054 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,46327,1689159476604' 2023-07-12 10:57:59,054 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:57:59,054 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,37587,1689159476845, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:37587, sessionid=0x1015920de080002 2023-07-12 10:57:59,054 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:57:59,054 DEBUG [RS:1;jenkins-hbase9:37587] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:59,054 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,37587,1689159476845' 2023-07-12 10:57:59,055 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:57:59,055 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:57:59,056 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:57:59,056 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:57:59,056 DEBUG [RS:0;jenkins-hbase9:46327] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:59,057 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,46327,1689159476604' 2023-07-12 10:57:59,057 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:57:59,057 DEBUG [RS:0;jenkins-hbase9:46327] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:57:59,058 DEBUG [RS:0;jenkins-hbase9:46327] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:57:59,058 INFO [RS:0;jenkins-hbase9:46327] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:57:59,058 INFO 
[RS:0;jenkins-hbase9:46327] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:57:59,061 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:57:59,062 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:57:59,062 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:57:59,062 DEBUG [RS:1;jenkins-hbase9:37587] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:57:59,063 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,37587,1689159476845' 2023-07-12 10:57:59,063 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:57:59,063 INFO [RS:2;jenkins-hbase9:39937] regionserver.Replication(203): jenkins-hbase9.apache.org,39937,1689159477046 started 2023-07-12 10:57:59,063 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,39937,1689159477046, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:39937, sessionid=0x1015920de080003 2023-07-12 10:57:59,063 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:57:59,063 DEBUG [RS:2;jenkins-hbase9:39937] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:59,063 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39937,1689159477046' 2023-07-12 10:57:59,063 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:57:59,063 DEBUG [RS:1;jenkins-hbase9:37587] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:57:59,065 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:57:59,065 DEBUG [RS:1;jenkins-hbase9:37587] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:57:59,066 INFO [RS:1;jenkins-hbase9:37587] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:57:59,066 INFO [RS:1;jenkins-hbase9:37587] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
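The flush-table-proc and online-snapshot procedure members being registered here are the region-server side of coordinated flushes and snapshots. A minimal sketch of the corresponding client calls, assuming a reachable cluster and a placeholder table name ("demo_table" is hypothetical and must already exist), not part of this test run:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndSnapshotSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Placeholder table name; not taken from the log above.
    TableName table = TableName.valueOf("demo_table");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Coordinated flush of the table's regions
      // (handled by the flush-table-proc members seen in the log).
      admin.flush(table);
      // Online snapshot (handled by the online-snapshot members).
      admin.snapshot("demo_table_snap", table);
    }
  }
}
```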
2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39937,1689159477046' 2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:57:59,066 DEBUG [RS:2;jenkins-hbase9:39937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:57:59,067 DEBUG [RS:2;jenkins-hbase9:39937] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:57:59,067 INFO [RS:2;jenkins-hbase9:39937] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:57:59,067 INFO [RS:2;jenkins-hbase9:39937] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:57:59,169 DEBUG [jenkins-hbase9:38969] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:57:59,174 INFO [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C46327%2C1689159476604, suffix=, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,46327,1689159476604, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs, maxLogs=32 2023-07-12 10:57:59,174 INFO [RS:2;jenkins-hbase9:39937] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C39937%2C1689159477046, suffix=, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,39937,1689159477046, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs, maxLogs=32 2023-07-12 10:57:59,174 INFO [RS:1;jenkins-hbase9:37587] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C37587%2C1689159476845, suffix=, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,37587,1689159476845, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs, maxLogs=32 2023-07-12 10:57:59,192 DEBUG [jenkins-hbase9:38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:57:59,194 DEBUG [jenkins-hbase9:38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:57:59,194 DEBUG [jenkins-hbase9:38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:57:59,194 DEBUG [jenkins-hbase9:38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:57:59,194 DEBUG [jenkins-hbase9:38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of 
racks=1 2023-07-12 10:57:59,207 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,46327,1689159476604, state=OPENING 2023-07-12 10:57:59,213 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:57:59,213 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:57:59,220 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:57:59,220 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:57:59,223 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:57:59,224 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:57:59,227 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 10:57:59,228 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:57:59,229 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:57:59,232 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:57:59,235 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:57:59,235 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:57:59,235 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:57:59,244 INFO [RS:1;jenkins-hbase9:37587] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,37587,1689159476845/jenkins-hbase9.apache.org%2C37587%2C1689159476845.1689159479179 2023-07-12 10:57:59,245 DEBUG [RS:1;jenkins-hbase9:37587] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK], DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK], DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK]] 2023-07-12 10:57:59,250 INFO [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,46327,1689159476604/jenkins-hbase9.apache.org%2C46327%2C1689159476604.1689159479180 2023-07-12 10:57:59,250 INFO [RS:2;jenkins-hbase9:39937] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,39937,1689159477046/jenkins-hbase9.apache.org%2C39937%2C1689159477046.1689159479180 2023-07-12 10:57:59,251 DEBUG [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK], DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK], DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK]] 2023-07-12 10:57:59,252 DEBUG [RS:2;jenkins-hbase9:39937] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK], DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK], DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK]] 2023-07-12 10:57:59,316 WARN [ReadOnlyZKClient-127.0.0.1:52490@0x532248e8] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:57:59,344 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:57:59,349 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:57:59,350 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46327] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:55446 deadline: 1689159539349, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:59,425 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:57:59,430 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:57:59,439 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55456, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:57:59,462 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:57:59,462 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:57:59,469 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C46327%2C1689159476604.meta, suffix=.meta, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,46327,1689159476604, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs, maxLogs=32 2023-07-12 10:57:59,510 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:57:59,511 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:57:59,511 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:57:59,529 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,46327,1689159476604/jenkins-hbase9.apache.org%2C46327%2C1689159476604.meta.1689159479471.meta 2023-07-12 10:57:59,530 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK], DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK], DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK]] 2023-07-12 10:57:59,531 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:57:59,533 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:57:59,537 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:57:59,540 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 10:57:59,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:57:59,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:57:59,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:57:59,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:57:59,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:57:59,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/info 2023-07-12 10:57:59,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/info 2023-07-12 10:57:59,560 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:57:59,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:59,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:57:59,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:57:59,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:57:59,564 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:57:59,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:59,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:57:59,567 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/table 2023-07-12 10:57:59,567 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/table 2023-07-12 10:57:59,567 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:57:59,568 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:57:59,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740 2023-07-12 10:57:59,578 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740 2023-07-12 10:57:59,589 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:57:59,597 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:57:59,600 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9810859520, jitterRate=-0.08629250526428223}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:57:59,601 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:57:59,616 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689159479420 2023-07-12 10:57:59,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:57:59,646 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:57:59,647 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,46327,1689159476604, state=OPEN 2023-07-12 10:57:59,651 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:57:59,651 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:57:59,656 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 10:57:59,656 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,46327,1689159476604 in 417 msec 2023-07-12 10:57:59,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 10:57:59,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 691 msec 2023-07-12 10:57:59,671 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1110 sec 2023-07-12 10:57:59,671 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159479671, completionTime=-1 2023-07-12 10:57:59,672 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 10:57:59,672 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
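Once the TransitRegionStateProcedure reaches state=OPEN and the meta location is published in ZooKeeper, clients stop seeing the earlier NotServingRegionException for hbase:meta. A minimal sketch, assuming a running cluster reachable through the default configuration, of reading that location through the client API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Resolves hbase:meta,,1.1588230740 to the region server currently hosting it.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is served by " + loc.getServerName());
    }
  }
}
```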
2023-07-12 10:57:59,746 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:57:59,746 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159539746 2023-07-12 10:57:59,746 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159599746 2023-07-12 10:57:59,746 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 74 msec 2023-07-12 10:57:59,763 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38969,1689159474659-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,763 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38969,1689159474659-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,763 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38969,1689159474659-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,765 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:38969, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,766 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:57:59,781 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 10:57:59,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 10:57:59,789 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:57:59,804 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 10:57:59,807 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:57:59,811 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:57:59,831 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:57:59,835 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1 empty. 2023-07-12 10:57:59,836 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:57:59,836 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 10:57:59,869 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:57:59,895 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:57:59,904 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:57:59,919 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:57:59,922 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 10:57:59,925 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 078a255e8fc6041e4f36118ed48c04a1, NAME => 'hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:57:59,930 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:57:59,930 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 empty. 2023-07-12 10:57:59,931 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:57:59,931 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 10:57:59,984 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:57:59,985 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 078a255e8fc6041e4f36118ed48c04a1, disabling compactions & flushes 2023-07-12 10:57:59,985 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:57:59,985 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:57:59,985 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. after waiting 0 ms 2023-07-12 10:57:59,985 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:57:59,985 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 
2023-07-12 10:57:59,985 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 078a255e8fc6041e4f36118ed48c04a1: 2023-07-12 10:57:59,990 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 10:57:59,992 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 15c37347446d32713d4f7b66d1f2d533, NAME => 'hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:57:59,993 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:00,020 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159479997"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159479997"}]},"ts":"1689159479997"} 2023-07-12 10:58:00,025 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:00,026 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 15c37347446d32713d4f7b66d1f2d533, disabling compactions & flushes 2023-07-12 10:58:00,026 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:00,027 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:00,027 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. after waiting 0 ms 2023-07-12 10:58:00,027 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:00,027 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:00,027 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:00,031 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:00,032 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159480032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159480032"}]},"ts":"1689159480032"} 2023-07-12 10:58:00,062 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:00,065 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:00,066 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:00,069 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:00,070 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159480065"}]},"ts":"1689159480065"} 2023-07-12 10:58:00,070 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159480069"}]},"ts":"1689159480069"} 2023-07-12 10:58:00,090 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 10:58:00,095 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 10:58:00,099 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:00,099 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:00,099 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:00,099 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:00,099 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:00,101 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, ASSIGN}] 2023-07-12 10:58:00,104 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:00,104 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:00,105 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:00,105 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, ASSIGN 2023-07-12 10:58:00,105 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:00,105 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:00,106 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=078a255e8fc6041e4f36118ed48c04a1, ASSIGN}] 2023-07-12 10:58:00,108 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:00,110 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=078a255e8fc6041e4f36118ed48c04a1, ASSIGN 2023-07-12 10:58:00,112 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=078a255e8fc6041e4f36118ed48c04a1, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:00,113 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 10:58:00,115 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:00,115 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=078a255e8fc6041e4f36118ed48c04a1, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:00,115 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159480114"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159480114"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159480114"}]},"ts":"1689159480114"} 2023-07-12 10:58:00,115 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159480114"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159480114"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159480114"}]},"ts":"1689159480114"} 2023-07-12 10:58:00,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:00,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 078a255e8fc6041e4f36118ed48c04a1, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:00,274 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:00,274 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:00,278 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:49348, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:00,308 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:00,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 078a255e8fc6041e4f36118ed48c04a1, NAME => 'hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:00,311 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:00,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15c37347446d32713d4f7b66d1f2d533, NAME => 'hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:00,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:00,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:00,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. service=MultiRowMutationService 2023-07-12 10:58:00,313 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:00,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:00,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,318 INFO [StoreOpener-078a255e8fc6041e4f36118ed48c04a1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,318 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,327 DEBUG [StoreOpener-078a255e8fc6041e4f36118ed48c04a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/info 2023-07-12 10:58:00,327 DEBUG [StoreOpener-078a255e8fc6041e4f36118ed48c04a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/info 2023-07-12 10:58:00,328 INFO [StoreOpener-078a255e8fc6041e4f36118ed48c04a1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 078a255e8fc6041e4f36118ed48c04a1 columnFamilyName info 2023-07-12 10:58:00,329 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:00,329 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:00,330 INFO 
[StoreOpener-078a255e8fc6041e4f36118ed48c04a1-1] regionserver.HStore(310): Store=078a255e8fc6041e4f36118ed48c04a1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:00,331 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15c37347446d32713d4f7b66d1f2d533 columnFamilyName m 2023-07-12 10:58:00,332 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,332 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(310): Store=15c37347446d32713d4f7b66d1f2d533/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:00,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,337 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,338 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:00,338 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:00,342 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 078a255e8fc6041e4f36118ed48c04a1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10706793440, jitterRate=-0.0028521567583084106}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:00,342 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 
078a255e8fc6041e4f36118ed48c04a1: 2023-07-12 10:58:00,344 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1., pid=9, masterSystemTime=1689159480282 2023-07-12 10:58:00,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:00,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:00,350 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 15c37347446d32713d4f7b66d1f2d533; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4dee7980, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:00,350 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:00,354 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533., pid=8, masterSystemTime=1689159480273 2023-07-12 10:58:00,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:00,354 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:00,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:00,360 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=078a255e8fc6041e4f36118ed48c04a1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:00,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:00,361 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159480359"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159480359"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159480359"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159480359"}]},"ts":"1689159480359"} 2023-07-12 10:58:00,362 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:00,362 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159480362"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159480362"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159480362"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159480362"}]},"ts":"1689159480362"} 2023-07-12 10:58:00,378 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 10:58:00,378 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 078a255e8fc6041e4f36118ed48c04a1, server=jenkins-hbase9.apache.org,46327,1689159476604 in 249 msec 2023-07-12 10:58:00,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 10:58:00,381 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,37587,1689159476845 in 253 msec 2023-07-12 10:58:00,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-12 10:58:00,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=078a255e8fc6041e4f36118ed48c04a1, ASSIGN in 272 msec 2023-07-12 10:58:00,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-12 10:58:00,388 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:00,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, ASSIGN in 280 msec 2023-07-12 10:58:00,388 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159480388"}]},"ts":"1689159480388"} 2023-07-12 10:58:00,390 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:00,391 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159480391"}]},"ts":"1689159480391"} 2023-07-12 10:58:00,392 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 10:58:00,393 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 10:58:00,401 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:00,401 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:00,408 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 10:58:00,410 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 611 msec 2023-07-12 10:58:00,410 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 532 msec 2023-07-12 10:58:00,410 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:00,410 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:00,460 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:00,484 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:00,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 45 msec 2023-07-12 10:58:00,505 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:00,507 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:00,512 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:49360, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:00,515 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:00,515 
DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 10:58:00,529 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:00,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 30 msec 2023-07-12 10:58:00,555 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:00,559 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:00,560 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.272sec 2023-07-12 10:58:00,562 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 10:58:00,564 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 10:58:00,564 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:00,566 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38969,1689159474659-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:00,567 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38969,1689159474659-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 10:58:00,575 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:00,596 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:00,597 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:00,599 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:00,610 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:00,642 DEBUG [Listener at localhost/35677] zookeeper.ReadOnlyZKClient(139): Connect 0x29253406 to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:00,648 DEBUG [Listener at localhost/35677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@702c0ae8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:00,664 DEBUG [hconnection-0x41a50c31-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:00,677 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:00,687 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:58:00,689 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:00,698 DEBUG [Listener at localhost/35677] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:00,703 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:00,719 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:00,719 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:00,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:00,727 DEBUG [Listener at localhost/35677] zookeeper.ReadOnlyZKClient(139): Connect 0x04131a0c to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-07-12 10:58:00,733 DEBUG [Listener at localhost/35677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43497e15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:00,733 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:58:00,738 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:00,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015920de08000a connected 2023-07-12 10:58:00,784 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=426, OpenFileDescriptor=686, MaxFileDescriptor=60000, SystemLoadAverage=321, ProcessCount=172, AvailableMemoryMB=6841 2023-07-12 10:58:00,787 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 10:58:00,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:00,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:00,863 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:00,880 INFO [Listener at localhost/35677] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:00,880 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:00,881 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:00,881 INFO [Listener at localhost/35677] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:00,881 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:00,881 INFO [Listener at localhost/35677] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:00,881 INFO [Listener at localhost/35677] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:00,887 INFO [Listener at localhost/35677] ipc.NettyRpcServer(120): Bind to /172.31.2.10:34631 2023-07-12 10:58:00,887 INFO [Listener at localhost/35677] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, 
blockSize=64 KB 2023-07-12 10:58:00,890 DEBUG [Listener at localhost/35677] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:00,892 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:00,897 INFO [Listener at localhost/35677] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:00,901 INFO [Listener at localhost/35677] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34631 connecting to ZooKeeper ensemble=127.0.0.1:52490 2023-07-12 10:58:00,921 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(162): regionserver:346310x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:00,921 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:346310x0, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:00,923 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(162): regionserver:346310x0, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:00,925 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34631-0x1015920de08000b connected 2023-07-12 10:58:00,926 DEBUG [Listener at localhost/35677] zookeeper.ZKUtil(164): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:00,926 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34631 2023-07-12 10:58:00,927 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34631 2023-07-12 10:58:00,927 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34631 2023-07-12 10:58:00,967 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34631 2023-07-12 10:58:00,971 DEBUG [Listener at localhost/35677] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34631 2023-07-12 10:58:00,977 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:00,977 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:00,977 INFO [Listener at localhost/35677] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:00,978 INFO [Listener at localhost/35677] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:00,978 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:00,978 INFO [Listener at localhost/35677] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:00,978 INFO [Listener at localhost/35677] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:00,979 INFO [Listener at localhost/35677] http.HttpServer(1146): Jetty bound to port 44411 2023-07-12 10:58:00,979 INFO [Listener at localhost/35677] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:00,986 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:00,987 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:00,987 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:00,987 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@726126ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:01,113 INFO [Listener at localhost/35677] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:01,114 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:01,114 INFO [Listener at localhost/35677] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:01,114 INFO [Listener at localhost/35677] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:01,116 INFO [Listener at localhost/35677] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:01,117 INFO [Listener at localhost/35677] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@519b7a7c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/java.io.tmpdir/jetty-0_0_0_0-44411-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3325196830699377655/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:01,118 INFO [Listener at localhost/35677] server.AbstractConnector(333): Started ServerConnector@1c038137{HTTP/1.1, (http/1.1)}{0.0.0.0:44411} 2023-07-12 10:58:01,119 INFO [Listener at localhost/35677] server.Server(415): Started @12017ms 2023-07-12 10:58:01,122 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(951): ClusterId : 56cb56be-be6f-4b5a-b417-f46c46fa5458 2023-07-12 10:58:01,122 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:01,126 DEBUG 
[RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:01,126 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:01,128 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:01,129 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ReadOnlyZKClient(139): Connect 0x54244694 to 127.0.0.1:52490 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:01,136 DEBUG [RS:3;jenkins-hbase9:34631] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ae3a260, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:01,137 DEBUG [RS:3;jenkins-hbase9:34631] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5edc2de4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:01,146 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:34631 2023-07-12 10:58:01,146 INFO [RS:3;jenkins-hbase9:34631] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:01,147 INFO [RS:3;jenkins-hbase9:34631] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:01,147 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:01,147 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38969,1689159474659 with isa=jenkins-hbase9.apache.org/172.31.2.10:34631, startcode=1689159480880 2023-07-12 10:58:01,148 DEBUG [RS:3;jenkins-hbase9:34631] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:01,157 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:52761, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:01,158 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38969] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,158 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:01,158 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4 2023-07-12 10:58:01,159 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36583 2023-07-12 10:58:01,159 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34865 2023-07-12 10:58:01,166 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:01,166 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:01,166 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:01,166 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:01,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:01,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:01,167 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,34631,1689159480880] 2023-07-12 10:58:01,168 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:01,168 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ZKUtil(162): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,168 WARN [RS:3;jenkins-hbase9:34631] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:01,168 INFO [RS:3;jenkins-hbase9:34631] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:01,168 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,168 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:01,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:01,170 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38969,1689159474659] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:01,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:01,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:01,189 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ZKUtil(162): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:01,189 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ZKUtil(162): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:01,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:01,190 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ZKUtil(162): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:01,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:01,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:01,192 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ZKUtil(162): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:01,193 DEBUG [RS:3;jenkins-hbase9:34631] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:01,193 INFO [RS:3;jenkins-hbase9:34631] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:01,198 INFO [RS:3;jenkins-hbase9:34631] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:01,198 INFO [RS:3;jenkins-hbase9:34631] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:01,198 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:01,204 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:01,206 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,207 DEBUG [RS:3;jenkins-hbase9:34631] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:01,217 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:01,218 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:01,218 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:01,238 INFO [RS:3;jenkins-hbase9:34631] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:01,238 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,34631,1689159480880-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:01,254 INFO [RS:3;jenkins-hbase9:34631] regionserver.Replication(203): jenkins-hbase9.apache.org,34631,1689159480880 started 2023-07-12 10:58:01,254 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,34631,1689159480880, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:34631, sessionid=0x1015920de08000b 2023-07-12 10:58:01,254 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:01,254 DEBUG [RS:3;jenkins-hbase9:34631] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,254 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34631,1689159480880' 2023-07-12 10:58:01,254 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34631,1689159480880' 2023-07-12 10:58:01,255 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:01,256 DEBUG [RS:3;jenkins-hbase9:34631] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:01,256 DEBUG [RS:3;jenkins-hbase9:34631] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:01,256 INFO [RS:3;jenkins-hbase9:34631] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:01,257 INFO [RS:3;jenkins-hbase9:34631] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:01,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:01,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:01,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:01,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:01,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:01,273 DEBUG [hconnection-0x2be10837-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:01,276 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55464, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:01,280 DEBUG [hconnection-0x2be10837-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:01,283 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:49366, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:01,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:01,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:01,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:01,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:01,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:34382 deadline: 1689160681295, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:01,298 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:01,300 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:01,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:01,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:01,302 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:01,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:01,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:01,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:01,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:01,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:01,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:01,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:01,319 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:01,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:01,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:01,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:01,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:01,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:01,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:01,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:01,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:01,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:01,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(238): Moving server region 15c37347446d32713d4f7b66d1f2d533, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:01,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE 2023-07-12 10:58:01,345 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE 2023-07-12 10:58:01,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 10:58:01,347 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:01,347 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159481347"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159481347"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159481347"}]},"ts":"1689159481347"} 2023-07-12 10:58:01,350 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:01,360 INFO [RS:3;jenkins-hbase9:34631] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C34631%2C1689159480880, suffix=, logDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,34631,1689159480880, archiveDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs, maxLogs=32 2023-07-12 10:58:01,392 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK] 2023-07-12 10:58:01,392 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK] 2023-07-12 10:58:01,392 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK] 2023-07-12 10:58:01,397 INFO [RS:3;jenkins-hbase9:34631] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/WALs/jenkins-hbase9.apache.org,34631,1689159480880/jenkins-hbase9.apache.org%2C34631%2C1689159480880.1689159481361 2023-07-12 10:58:01,397 DEBUG [RS:3;jenkins-hbase9:34631] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35873,DS-3ff1b66d-edec-4e67-b225-1d5d86044781,DISK], DatanodeInfoWithStorage[127.0.0.1:37303,DS-887fe223-a673-4cfb-b1ce-68110d962dc9,DISK], DatanodeInfoWithStorage[127.0.0.1:37045,DS-41b51c19-ddad-49ba-bf72-db3ca83d1657,DISK]] 2023-07-12 10:58:01,516 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:01,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 15c37347446d32713d4f7b66d1f2d533, disabling compactions & flushes 2023-07-12 10:58:01,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:01,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:01,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
after waiting 0 ms 2023-07-12 10:58:01,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:01,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 15c37347446d32713d4f7b66d1f2d533 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-12 10:58:01,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/fa03ad3aba974bcb876fb4ea7607a1c9 2023-07-12 10:58:01,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/fa03ad3aba974bcb876fb4ea7607a1c9 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/fa03ad3aba974bcb876fb4ea7607a1c9 2023-07-12 10:58:01,678 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/fa03ad3aba974bcb876fb4ea7607a1c9, entries=3, sequenceid=9, filesize=5.2 K 2023-07-12 10:58:01,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 15c37347446d32713d4f7b66d1f2d533 in 164ms, sequenceid=9, compaction requested=false 2023-07-12 10:58:01,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:01,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 10:58:01,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:01,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:01,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:01,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 15c37347446d32713d4f7b66d1f2d533 move to jenkins-hbase9.apache.org,39937,1689159477046 record at close sequenceid=9 2023-07-12 10:58:01,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:01,703 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=CLOSED 2023-07-12 10:58:01,703 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159481703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159481703"}]},"ts":"1689159481703"} 2023-07-12 10:58:01,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 10:58:01,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,37587,1689159476845 in 355 msec 2023-07-12 10:58:01,709 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:01,860 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:01,860 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:01,861 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159481860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159481860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159481860"}]},"ts":"1689159481860"} 2023-07-12 10:58:01,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:02,018 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:02,019 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:02,020 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:02,026 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:02,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15c37347446d32713d4f7b66d1f2d533, NAME => 'hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:02,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:02,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. service=MultiRowMutationService 2023-07-12 10:58:02,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:02,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,030 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,031 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:02,031 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:02,032 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15c37347446d32713d4f7b66d1f2d533 columnFamilyName m 2023-07-12 10:58:02,047 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(539): loaded hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/fa03ad3aba974bcb876fb4ea7607a1c9 2023-07-12 10:58:02,048 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(310): Store=15c37347446d32713d4f7b66d1f2d533/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:02,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,057 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:02,058 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 15c37347446d32713d4f7b66d1f2d533; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@567080a7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:02,058 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:02,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533., pid=14, masterSystemTime=1689159482018 2023-07-12 10:58:02,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:02,069 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:02,070 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:02,070 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159482070"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159482070"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159482070"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159482070"}]},"ts":"1689159482070"} 2023-07-12 10:58:02,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 10:58:02,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,39937,1689159477046 in 210 msec 2023-07-12 10:58:02,080 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE in 734 msec 2023-07-12 10:58:02,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 10:58:02,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to default 2023-07-12 10:58:02,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:02,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:02,349 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37587] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.2.10:49366 deadline: 1689159542348, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=39937 startCode=1689159477046. As of locationSeqNum=9. 2023-07-12 10:58:02,455 DEBUG [hconnection-0x2be10837-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:02,459 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60682, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:02,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:02,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:02,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:02,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:02,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:02,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:02,497 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:02,500 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37587] ipc.CallRunner(144): callId: 43 service: ClientService methodName: ExecService size: 622 connection: 172.31.2.10:49360 deadline: 1689159542499, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=39937 startCode=1689159477046. As of locationSeqNum=9. 
2023-07-12 10:58:02,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-12 10:58:02,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:02,604 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:02,605 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:02,617 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:02,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:02,618 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:02,619 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:02,619 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:02,625 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:02,632 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:02,632 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:02,632 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:02,632 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:02,633 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:02,634 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 empty. 2023-07-12 10:58:02,634 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d empty. 
2023-07-12 10:58:02,635 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d empty. 2023-07-12 10:58:02,635 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa empty. 2023-07-12 10:58:02,635 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:02,635 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 empty. 2023-07-12 10:58:02,636 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:02,636 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:02,636 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:02,637 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:02,637 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 10:58:02,667 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:02,669 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 412729755b679808198dfc6f2cac0939, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:02,669 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1ce1fedd657a7f04d1736cd5b684ce16, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:02,669 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 534c3379f9ac30b86b8e483e6736505d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:02,762 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 534c3379f9ac30b86b8e483e6736505d, disabling compactions & flushes 2023-07-12 10:58:02,763 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:02,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:02,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. after waiting 0 ms 2023-07-12 10:58:02,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:02,763 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 
2023-07-12 10:58:02,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 534c3379f9ac30b86b8e483e6736505d: 2023-07-12 10:58:02,764 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3069d9dd86fbf80353accb9185973ffa, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:02,764 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 412729755b679808198dfc6f2cac0939, disabling compactions & flushes 2023-07-12 10:58:02,765 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:02,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,767 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1ce1fedd657a7f04d1736cd5b684ce16, disabling compactions & flushes 2023-07-12 10:58:02,767 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:02,767 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:02,767 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. after waiting 0 ms 2023-07-12 10:58:02,768 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:02,768 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 
2023-07-12 10:58:02,768 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1ce1fedd657a7f04d1736cd5b684ce16: 2023-07-12 10:58:02,768 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 38f5a5f63eff3aced0270f7f731c4d3d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:02,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:02,769 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. after waiting 0 ms 2023-07-12 10:58:02,769 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:02,769 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:02,769 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 412729755b679808198dfc6f2cac0939: 2023-07-12 10:58:02,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:02,832 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,832 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 3069d9dd86fbf80353accb9185973ffa, disabling compactions & flushes 2023-07-12 10:58:02,832 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:02,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:02,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 
after waiting 0 ms 2023-07-12 10:58:02,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:02,833 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:02,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 3069d9dd86fbf80353accb9185973ffa: 2023-07-12 10:58:02,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:02,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 38f5a5f63eff3aced0270f7f731c4d3d, disabling compactions & flushes 2023-07-12 10:58:02,850 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:02,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:02,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. after waiting 0 ms 2023-07-12 10:58:02,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:02,851 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 
2023-07-12 10:58:02,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 38f5a5f63eff3aced0270f7f731c4d3d: 2023-07-12 10:58:02,856 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:02,857 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159482857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159482857"}]},"ts":"1689159482857"} 2023-07-12 10:58:02,857 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159482857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159482857"}]},"ts":"1689159482857"} 2023-07-12 10:58:02,858 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159482857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159482857"}]},"ts":"1689159482857"} 2023-07-12 10:58:02,858 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159482857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159482857"}]},"ts":"1689159482857"} 2023-07-12 10:58:02,858 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159482857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159482857"}]},"ts":"1689159482857"} 2023-07-12 10:58:02,941 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 10:58:02,943 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:02,943 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159482943"}]},"ts":"1689159482943"} 2023-07-12 10:58:02,947 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 10:58:02,960 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:02,961 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:02,961 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:02,961 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:02,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, ASSIGN}] 2023-07-12 10:58:02,966 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, ASSIGN 2023-07-12 10:58:02,966 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, ASSIGN 2023-07-12 10:58:02,967 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, ASSIGN 2023-07-12 10:58:02,968 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, ASSIGN 2023-07-12 10:58:02,970 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, ASSIGN 2023-07-12 10:58:02,972 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:02,972 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:02,972 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:02,972 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:02,973 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:03,123 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 10:58:03,128 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,128 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,128 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,128 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,128 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,129 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483128"}]},"ts":"1689159483128"} 2023-07-12 10:58:03,129 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483128"}]},"ts":"1689159483128"} 2023-07-12 10:58:03,129 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483128"}]},"ts":"1689159483128"} 2023-07-12 10:58:03,129 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483128"}]},"ts":"1689159483128"} 2023-07-12 10:58:03,129 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483128"}]},"ts":"1689159483128"} 2023-07-12 10:58:03,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 
10:58:03,135 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:03,142 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=19, state=RUNNABLE; OpenRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:03,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,146 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=20, state=RUNNABLE; OpenRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 38f5a5f63eff3aced0270f7f731c4d3d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 10:58:03,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:03,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,294 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,296 DEBUG [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/f 2023-07-12 10:58:03,297 DEBUG [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/f 2023-07-12 10:58:03,297 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 38f5a5f63eff3aced0270f7f731c4d3d columnFamilyName f 2023-07-12 10:58:03,298 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] regionserver.HStore(310): Store=38f5a5f63eff3aced0270f7f731c4d3d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:03,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3069d9dd86fbf80353accb9185973ffa, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 10:58:03,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:03,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,306 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,308 DEBUG [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/f 2023-07-12 10:58:03,308 DEBUG [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/f 2023-07-12 10:58:03,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,308 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3069d9dd86fbf80353accb9185973ffa columnFamilyName f 2023-07-12 10:58:03,309 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] regionserver.HStore(310): Store=3069d9dd86fbf80353accb9185973ffa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:03,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:03,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 38f5a5f63eff3aced0270f7f731c4d3d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9774520960, jitterRate=-0.08967679738998413}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:03,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 38f5a5f63eff3aced0270f7f731c4d3d: 2023-07-12 10:58:03,317 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d., pid=25, masterSystemTime=1689159483287 2023-07-12 10:58:03,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:03,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 412729755b679808198dfc6f2cac0939, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 10:58:03,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:03,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,321 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,321 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483321"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159483321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159483321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159483321"}]},"ts":"1689159483321"} 2023-07-12 10:58:03,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:03,324 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 3069d9dd86fbf80353accb9185973ffa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10360747520, jitterRate=-0.0350801944732666}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:03,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 3069d9dd86fbf80353accb9185973ffa: 2023-07-12 10:58:03,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa., pid=23, masterSystemTime=1689159483294 2023-07-12 10:58:03,327 DEBUG [StoreOpener-412729755b679808198dfc6f2cac0939-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/f 2023-07-12 10:58:03,327 DEBUG [StoreOpener-412729755b679808198dfc6f2cac0939-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/f 2023-07-12 10:58:03,328 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 412729755b679808198dfc6f2cac0939 columnFamilyName f 2023-07-12 10:58:03,328 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] regionserver.HStore(310): Store=412729755b679808198dfc6f2cac0939/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:03,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=20 2023-07-12 10:58:03,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=20, state=SUCCESS; OpenRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,46327,1689159476604 in 179 msec 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ce1fedd657a7f04d1736cd5b684ce16, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:03,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,332 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,332 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483331"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159483331"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159483331"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159483331"}]},"ts":"1689159483331"} 2023-07-12 10:58:03,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, ASSIGN in 369 msec 2023-07-12 10:58:03,334 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,337 DEBUG [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/f 2023-07-12 10:58:03,337 DEBUG [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/f 2023-07-12 10:58:03,337 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ce1fedd657a7f04d1736cd5b684ce16 columnFamilyName f 2023-07-12 10:58:03,339 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] regionserver.HStore(310): Store=1ce1fedd657a7f04d1736cd5b684ce16/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:03,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=19 2023-07-12 10:58:03,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=19, state=SUCCESS; OpenRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,39937,1689159477046 in 193 msec 2023-07-12 10:58:03,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, ASSIGN in 380 msec 2023-07-12 10:58:03,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:03,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:03,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 412729755b679808198dfc6f2cac0939; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9668750400, jitterRate=-0.09952744841575623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:03,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 412729755b679808198dfc6f2cac0939: 2023-07-12 10:58:03,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1ce1fedd657a7f04d1736cd5b684ce16; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10836252480, jitterRate=0.009204655885696411}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:03,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1ce1fedd657a7f04d1736cd5b684ce16: 2023-07-12 10:58:03,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16., pid=22, masterSystemTime=1689159483294 2023-07-12 10:58:03,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939., pid=21, masterSystemTime=1689159483287 2023-07-12 10:58:03,357 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,357 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483357"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159483357"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159483357"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159483357"}]},"ts":"1689159483357"} 2023-07-12 10:58:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:03,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:03,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 534c3379f9ac30b86b8e483e6736505d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 10:58:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:03,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,360 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,362 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483360"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159483360"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159483360"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159483360"}]},"ts":"1689159483360"} 2023-07-12 10:58:03,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-12 10:58:03,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,39937,1689159477046 in 219 msec 2023-07-12 10:58:03,367 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, ASSIGN in 404 msec 2023-07-12 10:58:03,368 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-12 10:58:03,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,46327,1689159476604 in 230 msec 2023-07-12 10:58:03,370 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,371 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, ASSIGN in 407 msec 2023-07-12 10:58:03,372 DEBUG [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/f 2023-07-12 10:58:03,372 DEBUG [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/f 2023-07-12 10:58:03,373 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 534c3379f9ac30b86b8e483e6736505d columnFamilyName f 2023-07-12 10:58:03,374 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] regionserver.HStore(310): Store=534c3379f9ac30b86b8e483e6736505d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:03,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:03,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 534c3379f9ac30b86b8e483e6736505d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9967043360, jitterRate=-0.07174675166606903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:03,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 534c3379f9ac30b86b8e483e6736505d: 2023-07-12 10:58:03,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d., pid=24, masterSystemTime=1689159483287 2023-07-12 10:58:03,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:03,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:03,386 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,387 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483386"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159483386"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159483386"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159483386"}]},"ts":"1689159483386"} 2023-07-12 10:58:03,391 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-12 10:58:03,391 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,46327,1689159476604 in 244 msec 2023-07-12 10:58:03,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-12 10:58:03,394 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, ASSIGN in 430 msec 2023-07-12 10:58:03,395 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:03,395 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159483395"}]},"ts":"1689159483395"} 2023-07-12 10:58:03,397 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 10:58:03,401 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:03,405 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 909 msec 2023-07-12 10:58:03,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:03,635 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-12 10:58:03,636 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-12 10:58:03,637 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:03,642 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 10:58:03,643 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:03,643 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-12 10:58:03,643 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:03,648 DEBUG [Listener at localhost/35677] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:03,651 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:37814, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:03,653 DEBUG [Listener at localhost/35677] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:03,656 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:52686, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:03,656 DEBUG [Listener at localhost/35677] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:03,659 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60698, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:03,661 DEBUG [Listener at localhost/35677] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:03,663 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:56538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:03,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:03,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:03,674 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,681 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:03,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:03,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:03,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 412729755b679808198dfc6f2cac0939 to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:03,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:03,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:03,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:03,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:03,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, REOPEN/MOVE 2023-07-12 10:58:03,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 1ce1fedd657a7f04d1736cd5b684ce16 to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,693 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, REOPEN/MOVE 2023-07-12 10:58:03,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:03,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:03,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 
2023-07-12 10:58:03,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:03,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:03,694 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483694"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483694"}]},"ts":"1689159483694"} 2023-07-12 10:58:03,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, REOPEN/MOVE 2023-07-12 10:58:03,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 534c3379f9ac30b86b8e483e6736505d to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:03,696 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, REOPEN/MOVE 2023-07-12 10:58:03,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:03,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:03,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:03,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:03,698 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,698 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483698"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483698"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483698"}]},"ts":"1689159483698"} 2023-07-12 10:58:03,698 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 412729755b679808198dfc6f2cac0939, 
server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, REOPEN/MOVE 2023-07-12 10:58:03,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 3069d9dd86fbf80353accb9185973ffa to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:03,700 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, REOPEN/MOVE 2023-07-12 10:58:03,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:03,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:03,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:03,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:03,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:03,702 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, REOPEN/MOVE 2023-07-12 10:58:03,703 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483702"}]},"ts":"1689159483702"} 2023-07-12 10:58:03,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 38f5a5f63eff3aced0270f7f731c4d3d to RSGroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:03,703 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, REOPEN/MOVE 2023-07-12 10:58:03,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:03,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:03,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:03,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:03,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:03,706 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:03,706 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483705"}]},"ts":"1689159483705"} 2023-07-12 10:58:03,706 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, REOPEN/MOVE 2023-07-12 10:58:03,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1238877202, current retry=0 2023-07-12 10:58:03,708 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, REOPEN/MOVE 2023-07-12 10:58:03,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:03,711 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:03,711 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483711"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159483711"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159483711"}]},"ts":"1689159483711"} 2023-07-12 10:58:03,714 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure 
38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:03,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 38f5a5f63eff3aced0270f7f731c4d3d, disabling compactions & flushes 2023-07-12 10:58:03,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. after waiting 0 ms 2023-07-12 10:58:03,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:03,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1ce1fedd657a7f04d1736cd5b684ce16, disabling compactions & flushes 2023-07-12 10:58:03,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. after waiting 0 ms 2023-07-12 10:58:03,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:03,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 
2023-07-12 10:58:03,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 38f5a5f63eff3aced0270f7f731c4d3d: 2023-07-12 10:58:03,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 38f5a5f63eff3aced0270f7f731c4d3d move to jenkins-hbase9.apache.org,37587,1689159476845 record at close sequenceid=2 2023-07-12 10:58:03,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:03,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:03,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1ce1fedd657a7f04d1736cd5b684ce16: 2023-07-12 10:58:03,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 1ce1fedd657a7f04d1736cd5b684ce16 move to jenkins-hbase9.apache.org,37587,1689159476845 record at close sequenceid=2 2023-07-12 10:58:03,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:03,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 412729755b679808198dfc6f2cac0939, disabling compactions & flushes 2023-07-12 10:58:03,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:03,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:03,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. after waiting 0 ms 2023-07-12 10:58:03,876 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=CLOSED 2023-07-12 10:58:03,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:03,877 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483876"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159483876"}]},"ts":"1689159483876"} 2023-07-12 10:58:03,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:03,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 3069d9dd86fbf80353accb9185973ffa, disabling compactions & flushes 2023-07-12 10:58:03,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. after waiting 0 ms 2023-07-12 10:58:03,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 
2023-07-12 10:58:03,895 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=CLOSED 2023-07-12 10:58:03,895 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483895"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159483895"}]},"ts":"1689159483895"} 2023-07-12 10:58:03,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-12 10:58:03,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,46327,1689159476604 in 202 msec 2023-07-12 10:58:03,921 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:03,926 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-12 10:58:03,926 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,39937,1689159477046 in 219 msec 2023-07-12 10:58:03,927 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:03,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:03,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:03,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:03,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 412729755b679808198dfc6f2cac0939: 2023-07-12 10:58:03,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 412729755b679808198dfc6f2cac0939 move to jenkins-hbase9.apache.org,37587,1689159476845 record at close sequenceid=2 2023-07-12 10:58:03,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:03,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 3069d9dd86fbf80353accb9185973ffa: 2023-07-12 10:58:03,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 3069d9dd86fbf80353accb9185973ffa move to jenkins-hbase9.apache.org,34631,1689159480880 record at close sequenceid=2 2023-07-12 10:58:03,947 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=CLOSED 2023-07-12 10:58:03,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:03,947 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159483945"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159483945"}]},"ts":"1689159483945"} 2023-07-12 10:58:03,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 534c3379f9ac30b86b8e483e6736505d, disabling compactions & flushes 2023-07-12 10:58:03,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:03,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:03,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. after waiting 0 ms 2023-07-12 10:58:03,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:03,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 
2023-07-12 10:58:03,949 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=CLOSED 2023-07-12 10:58:03,949 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483949"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159483949"}]},"ts":"1689159483949"} 2023-07-12 10:58:03,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-12 10:58:03,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,46327,1689159476604 in 252 msec 2023-07-12 10:58:03,958 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:03,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-12 10:58:03,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,39937,1689159477046 in 241 msec 2023-07-12 10:58:03,961 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,34631,1689159480880; forceNewPlan=false, retain=false 2023-07-12 10:58:03,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:03,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 
2023-07-12 10:58:03,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 534c3379f9ac30b86b8e483e6736505d: 2023-07-12 10:58:03,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 534c3379f9ac30b86b8e483e6736505d move to jenkins-hbase9.apache.org,37587,1689159476845 record at close sequenceid=2 2023-07-12 10:58:03,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:03,971 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=CLOSED 2023-07-12 10:58:03,972 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159483971"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159483971"}]},"ts":"1689159483971"} 2023-07-12 10:58:03,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-12 10:58:03,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,46327,1689159476604 in 268 msec 2023-07-12 10:58:03,979 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:04,071 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 10:58:04,072 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,072 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,072 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,072 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,072 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484072"}]},"ts":"1689159484072"} 2023-07-12 10:58:04,072 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484072"}]},"ts":"1689159484072"} 2023-07-12 10:58:04,072 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:04,072 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484072"}]},"ts":"1689159484072"} 2023-07-12 10:58:04,073 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484072"}]},"ts":"1689159484072"} 2023-07-12 10:58:04,073 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484072"}]},"ts":"1689159484072"} 2023-07-12 10:58:04,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=26, state=RUNNABLE; OpenRegionProcedure 
412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,078 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=32, state=RUNNABLE; OpenRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,082 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=30, state=RUNNABLE; OpenRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:04,083 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=27, state=RUNNABLE; OpenRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ce1fedd657a7f04d1736cd5b684ce16, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 10:58:04,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:04,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,237 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:04,237 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:04,239 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,239 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:37828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:04,240 DEBUG [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/f 2023-07-12 10:58:04,241 DEBUG [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/f 2023-07-12 10:58:04,242 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ce1fedd657a7f04d1736cd5b684ce16 columnFamilyName f 2023-07-12 10:58:04,243 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3069d9dd86fbf80353accb9185973ffa, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 10:58:04,243 INFO [StoreOpener-1ce1fedd657a7f04d1736cd5b684ce16-1] regionserver.HStore(310): Store=1ce1fedd657a7f04d1736cd5b684ce16/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:04,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:04,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,248 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,249 DEBUG [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/f 2023-07-12 10:58:04,249 DEBUG [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/f 2023-07-12 10:58:04,250 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3069d9dd86fbf80353accb9185973ffa columnFamilyName f 2023-07-12 10:58:04,251 INFO [StoreOpener-3069d9dd86fbf80353accb9185973ffa-1] regionserver.HStore(310): Store=3069d9dd86fbf80353accb9185973ffa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:04,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1ce1fedd657a7f04d1736cd5b684ce16; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9508033120, jitterRate=-0.1144954115152359}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:04,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1ce1fedd657a7f04d1736cd5b684ce16: 2023-07-12 10:58:04,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16., pid=40, masterSystemTime=1689159484229 2023-07-12 10:58:04,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open 
deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 38f5a5f63eff3aced0270f7f731c4d3d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 10:58:04,259 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,259 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484259"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159484259"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159484259"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159484259"}]},"ts":"1689159484259"} 2023-07-12 10:58:04,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:04,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,263 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,264 DEBUG [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/f 2023-07-12 10:58:04,264 DEBUG [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/f 2023-07-12 10:58:04,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=27 2023-07-12 10:58:04,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=27, state=SUCCESS; OpenRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,37587,1689159476845 in 179 msec 2023-07-12 10:58:04,265 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 38f5a5f63eff3aced0270f7f731c4d3d columnFamilyName f 2023-07-12 10:58:04,265 INFO [StoreOpener-38f5a5f63eff3aced0270f7f731c4d3d-1] regionserver.HStore(310): Store=38f5a5f63eff3aced0270f7f731c4d3d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:04,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 3069d9dd86fbf80353accb9185973ffa; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10468051360, jitterRate=-0.025086745619773865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:04,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 3069d9dd86fbf80353accb9185973ffa: 2023-07-12 10:58:04,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, REOPEN/MOVE in 571 msec 2023-07-12 10:58:04,271 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa., pid=39, masterSystemTime=1689159484237 2023-07-12 10:58:04,275 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,276 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:04,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,277 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484276"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159484276"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159484276"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159484276"}]},"ts":"1689159484276"} 2023-07-12 10:58:04,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 38f5a5f63eff3aced0270f7f731c4d3d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11569098240, jitterRate=0.07745623588562012}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:04,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 38f5a5f63eff3aced0270f7f731c4d3d: 2023-07-12 10:58:04,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d., pid=38, masterSystemTime=1689159484229 2023-07-12 10:58:04,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:04,281 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 412729755b679808198dfc6f2cac0939, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 10:58:04,282 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484281"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159484281"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159484281"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159484281"}]},"ts":"1689159484281"} 2023-07-12 10:58:04,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:04,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,283 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=30 2023-07-12 10:58:04,283 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=30, state=SUCCESS; OpenRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,34631,1689159480880 in 197 msec 2023-07-12 10:58:04,284 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,286 DEBUG [StoreOpener-412729755b679808198dfc6f2cac0939-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/f 2023-07-12 10:58:04,286 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, REOPEN/MOVE in 583 msec 2023-07-12 10:58:04,286 DEBUG [StoreOpener-412729755b679808198dfc6f2cac0939-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/f 
2023-07-12 10:58:04,287 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 412729755b679808198dfc6f2cac0939 columnFamilyName f 2023-07-12 10:58:04,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=32 2023-07-12 10:58:04,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=32, state=SUCCESS; OpenRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,37587,1689159476845 in 203 msec 2023-07-12 10:58:04,287 INFO [StoreOpener-412729755b679808198dfc6f2cac0939-1] regionserver.HStore(310): Store=412729755b679808198dfc6f2cac0939/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:04,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, REOPEN/MOVE in 583 msec 2023-07-12 10:58:04,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 412729755b679808198dfc6f2cac0939; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11444276320, jitterRate=0.06583128869533539}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:04,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 412729755b679808198dfc6f2cac0939: 2023-07-12 10:58:04,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939., pid=36, masterSystemTime=1689159484229 2023-07-12 10:58:04,297 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:04,297 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:04,297 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:04,297 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 534c3379f9ac30b86b8e483e6736505d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 10:58:04,298 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:04,298 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484298"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159484298"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159484298"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159484298"}]},"ts":"1689159484298"} 2023-07-12 10:58:04,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,299 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,301 DEBUG [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/f 2023-07-12 10:58:04,301 DEBUG [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/f 2023-07-12 10:58:04,301 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 534c3379f9ac30b86b8e483e6736505d columnFamilyName f 2023-07-12 10:58:04,302 INFO [StoreOpener-534c3379f9ac30b86b8e483e6736505d-1] regionserver.HStore(310): Store=534c3379f9ac30b86b8e483e6736505d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:04,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,305 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=26 2023-07-12 10:58:04,305 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=26, state=SUCCESS; OpenRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,37587,1689159476845 in 224 msec 2023-07-12 10:58:04,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, REOPEN/MOVE in 614 msec 2023-07-12 10:58:04,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,316 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 534c3379f9ac30b86b8e483e6736505d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9489528800, jitterRate=-0.11621876060962677}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:04,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 534c3379f9ac30b86b8e483e6736505d: 2023-07-12 10:58:04,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d., pid=37, masterSystemTime=1689159484229 2023-07-12 10:58:04,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 
2023-07-12 10:58:04,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:04,320 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,321 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484320"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159484320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159484320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159484320"}]},"ts":"1689159484320"} 2023-07-12 10:58:04,325 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-12 10:58:04,325 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,37587,1689159476845 in 244 msec 2023-07-12 10:58:04,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, REOPEN/MOVE in 628 msec 2023-07-12 10:58:04,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-12 10:58:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1238877202. 
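The entries above mark the end of the RSGroupAdminService.MoveTables request: all regions of Group_testTableMoveTruncateAndDrop have been moved to target group Group_testTableMoveTruncateAndDrop_1238877202. For orientation only, a minimal client-side sketch of that request as it would be issued through the branch-2.4 hbase-rsgroup API; the Connection handling is an assumption, and only the table and group names are taken from the log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      static void moveAndVerify(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String targetGroup = "Group_testTableMoveTruncateAndDrop_1238877202";
        // Triggers the REOPEN/MOVE TransitRegionStateProcedures logged above.
        rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
        // Mirrors the GetRSGroupInfoOfTable request that follows in the log.
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
        System.out.println("table is now in rsgroup " + info.getName());
      }
    }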
2023-07-12 10:58:04,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:04,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:04,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:04,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:04,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:04,721 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:04,728 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:04,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:04,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:04,752 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159484751"}]},"ts":"1689159484751"} 2023-07-12 10:58:04,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 10:58:04,753 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 10:58:04,755 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 10:58:04,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=38f5a5f63eff3aced0270f7f731c4d3d, UNASSIGN}] 2023-07-12 10:58:04,760 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, UNASSIGN 2023-07-12 10:58:04,760 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, UNASSIGN 2023-07-12 10:58:04,760 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, UNASSIGN 2023-07-12 10:58:04,761 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, UNASSIGN 2023-07-12 10:58:04,761 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, UNASSIGN 2023-07-12 10:58:04,766 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:04,766 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,766 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,767 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484766"}]},"ts":"1689159484766"} 2023-07-12 10:58:04,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484766"}]},"ts":"1689159484766"} 2023-07-12 10:58:04,767 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484766"}]},"ts":"1689159484766"} 2023-07-12 10:58:04,769 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=46, state=RUNNABLE; CloseRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,767 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,767 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:04,769 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484767"}]},"ts":"1689159484767"} 2023-07-12 10:58:04,770 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159484767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159484767"}]},"ts":"1689159484767"} 2023-07-12 10:58:04,771 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,771 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=45, state=RUNNABLE; CloseRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:04,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=42, state=RUNNABLE; CloseRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=44, state=RUNNABLE; CloseRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:04,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 10:58:04,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 534c3379f9ac30b86b8e483e6736505d, disabling compactions & flushes 2023-07-12 10:58:04,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:04,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 
2023-07-12 10:58:04,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. after waiting 0 ms 2023-07-12 10:58:04,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:04,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 3069d9dd86fbf80353accb9185973ffa, disabling compactions & flushes 2023-07-12 10:58:04,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. after waiting 0 ms 2023-07-12 10:58:04,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 2023-07-12 10:58:04,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:04,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d. 2023-07-12 10:58:04,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 534c3379f9ac30b86b8e483e6736505d: 2023-07-12 10:58:04,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:04,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa. 
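The close/unassign activity in this stretch stems from the DisableTableProcedure (pid=41) stored for the "disable Group_testTableMoveTruncateAndDrop" request a few entries earlier. As a hedged sketch only (not part of the captured log), the client call that produces it is a single Admin API invocation; the Connection is assumed.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class DisableTableSketch {
      static void disable(Connection conn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // Blocks while the master polls "Checking to see if procedure is done pid=41",
          // i.e. until every region is unassigned and the table state is DISABLED.
          admin.disableTable(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        }
      }
    }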
2023-07-12 10:58:04,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 3069d9dd86fbf80353accb9185973ffa: 2023-07-12 10:58:04,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:04,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 38f5a5f63eff3aced0270f7f731c4d3d, disabling compactions & flushes 2023-07-12 10:58:04,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. after waiting 0 ms 2023-07-12 10:58:04,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,959 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=534c3379f9ac30b86b8e483e6736505d, regionState=CLOSED 2023-07-12 10:58:04,960 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159484959"}]},"ts":"1689159484959"} 2023-07-12 10:58:04,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:04,960 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=3069d9dd86fbf80353accb9185973ffa, regionState=CLOSED 2023-07-12 10:58:04,961 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159484960"}]},"ts":"1689159484960"} 2023-07-12 10:58:04,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=44 2023-07-12 10:58:04,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=44, state=SUCCESS; CloseRegionProcedure 534c3379f9ac30b86b8e483e6736505d, server=jenkins-hbase9.apache.org,37587,1689159476845 in 188 msec 2023-07-12 10:58:04,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 
10:58:04,968 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=45 2023-07-12 10:58:04,968 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; CloseRegionProcedure 3069d9dd86fbf80353accb9185973ffa, server=jenkins-hbase9.apache.org,34631,1689159480880 in 193 msec 2023-07-12 10:58:04,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d. 2023-07-12 10:58:04,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 38f5a5f63eff3aced0270f7f731c4d3d: 2023-07-12 10:58:04,970 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=534c3379f9ac30b86b8e483e6736505d, UNASSIGN in 209 msec 2023-07-12 10:58:04,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3069d9dd86fbf80353accb9185973ffa, UNASSIGN in 211 msec 2023-07-12 10:58:04,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:04,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1ce1fedd657a7f04d1736cd5b684ce16, disabling compactions & flushes 2023-07-12 10:58:04,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. after waiting 0 ms 2023-07-12 10:58:04,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 
2023-07-12 10:58:04,974 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=38f5a5f63eff3aced0270f7f731c4d3d, regionState=CLOSED 2023-07-12 10:58:04,974 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484974"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159484974"}]},"ts":"1689159484974"} 2023-07-12 10:58:04,981 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=46 2023-07-12 10:58:04,981 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=46, state=SUCCESS; CloseRegionProcedure 38f5a5f63eff3aced0270f7f731c4d3d, server=jenkins-hbase9.apache.org,37587,1689159476845 in 208 msec 2023-07-12 10:58:04,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:04,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16. 2023-07-12 10:58:04,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1ce1fedd657a7f04d1736cd5b684ce16: 2023-07-12 10:58:04,983 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=38f5a5f63eff3aced0270f7f731c4d3d, UNASSIGN in 224 msec 2023-07-12 10:58:04,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:04,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 412729755b679808198dfc6f2cac0939, disabling compactions & flushes 2023-07-12 10:58:04,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:04,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:04,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. after waiting 0 ms 2023-07-12 10:58:04,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 
2023-07-12 10:58:04,985 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1ce1fedd657a7f04d1736cd5b684ce16, regionState=CLOSED 2023-07-12 10:58:04,985 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159484985"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159484985"}]},"ts":"1689159484985"} 2023-07-12 10:58:04,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-12 10:58:04,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 1ce1fedd657a7f04d1736cd5b684ce16, server=jenkins-hbase9.apache.org,37587,1689159476845 in 217 msec 2023-07-12 10:58:04,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:04,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939. 2023-07-12 10:58:04,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 412729755b679808198dfc6f2cac0939: 2023-07-12 10:58:04,993 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1ce1fedd657a7f04d1736cd5b684ce16, UNASSIGN in 234 msec 2023-07-12 10:58:04,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 412729755b679808198dfc6f2cac0939 2023-07-12 10:58:04,995 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=412729755b679808198dfc6f2cac0939, regionState=CLOSED 2023-07-12 10:58:04,995 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159484995"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159484995"}]},"ts":"1689159484995"} 2023-07-12 10:58:05,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=42 2023-07-12 10:58:05,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=42, state=SUCCESS; CloseRegionProcedure 412729755b679808198dfc6f2cac0939, server=jenkins-hbase9.apache.org,37587,1689159476845 in 225 msec 2023-07-12 10:58:05,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=41 2023-07-12 10:58:05,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=412729755b679808198dfc6f2cac0939, UNASSIGN in 244 msec 2023-07-12 10:58:05,004 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159485004"}]},"ts":"1689159485004"} 2023-07-12 10:58:05,006 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 10:58:05,009 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 10:58:05,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 274 msec 2023-07-12 10:58:05,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 10:58:05,056 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-12 10:58:05,057 INFO [Listener at localhost/35677] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:05,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$6(2260): Client=jenkins//172.31.2.10 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:05,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-12 10:58:05,074 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 10:58:05,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 10:58:05,088 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:05,088 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:05,088 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:05,088 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:05,088 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:05,094 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/f, FileablePath, 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits] 2023-07-12 10:58:05,094 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits] 2023-07-12 10:58:05,094 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits] 2023-07-12 10:58:05,094 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits] 2023-07-12 10:58:05,096 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits] 2023-07-12 10:58:05,114 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa/recovered.edits/7.seqid 2023-07-12 10:58:05,114 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16/recovered.edits/7.seqid 2023-07-12 10:58:05,115 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits/7.seqid to 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d/recovered.edits/7.seqid 2023-07-12 10:58:05,116 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d/recovered.edits/7.seqid 2023-07-12 10:58:05,116 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1ce1fedd657a7f04d1736cd5b684ce16 2023-07-12 10:58:05,116 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3069d9dd86fbf80353accb9185973ffa 2023-07-12 10:58:05,117 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/534c3379f9ac30b86b8e483e6736505d 2023-07-12 10:58:05,117 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/38f5a5f63eff3aced0270f7f731c4d3d 2023-07-12 10:58:05,118 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939/recovered.edits/7.seqid 2023-07-12 10:58:05,119 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/412729755b679808198dfc6f2cac0939 2023-07-12 10:58:05,119 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 10:58:05,123 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:05,172 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 10:58:05,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 10:58:05,190 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 10:58:05,191 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 10:58:05,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159485191"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159485191"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159485191"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159485191"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159485191"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,195 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 10:58:05,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 412729755b679808198dfc6f2cac0939, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159482489.412729755b679808198dfc6f2cac0939.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1ce1fedd657a7f04d1736cd5b684ce16, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159482489.1ce1fedd657a7f04d1736cd5b684ce16.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 534c3379f9ac30b86b8e483e6736505d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159482489.534c3379f9ac30b86b8e483e6736505d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3069d9dd86fbf80353accb9185973ffa, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159482489.3069d9dd86fbf80353accb9185973ffa.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 38f5a5f63eff3aced0270f7f731c4d3d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159482489.38f5a5f63eff3aced0270f7f731c4d3d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 10:58:05,196 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 10:58:05,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159485196"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:05,202 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 10:58:05,216 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,216 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,216 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,216 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,217 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,217 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 empty. 2023-07-12 10:58:05,218 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf empty. 2023-07-12 10:58:05,218 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 empty. 2023-07-12 10:58:05,218 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 empty. 2023-07-12 10:58:05,218 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 empty. 
2023-07-12 10:58:05,218 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,219 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,219 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,219 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,220 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,220 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 10:58:05,227 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:05,227 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 10:58:05,228 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:05,228 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 10:58:05,228 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:05,228 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 10:58:05,229 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:05,230 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:05,253 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:05,256 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): 
creating {ENCODED => 66ad69ea3e9dbe003d717b5c2042a6c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:05,259 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d1f2fa8d6c1a4f9192f68614dabcd2e7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:05,259 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6395ade763b737d33b8e61918c5b36d1, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:05,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 66ad69ea3e9dbe003d717b5c2042a6c4, disabling compactions & flushes 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 6395ade763b737d33b8e61918c5b36d1, disabling compactions & flushes 2023-07-12 10:58:05,350 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,350 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. after waiting 0 ms 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,350 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 66ad69ea3e9dbe003d717b5c2042a6c4: 2023-07-12 10:58:05,350 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,351 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1440130602c150be12010ef49b642cd3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:05,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. after waiting 0 ms 2023-07-12 10:58:05,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,351 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 
2023-07-12 10:58:05,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 6395ade763b737d33b8e61918c5b36d1: 2023-07-12 10:58:05,352 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f0bb4bf8c41d01214285263b44c56fdf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d1f2fa8d6c1a4f9192f68614dabcd2e7, disabling compactions & flushes 2023-07-12 10:58:05,364 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. after waiting 0 ms 2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,364 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 
2023-07-12 10:58:05,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d1f2fa8d6c1a4f9192f68614dabcd2e7: 2023-07-12 10:58:05,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 10:58:05,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 1440130602c150be12010ef49b642cd3, disabling compactions & flushes 2023-07-12 10:58:05,392 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. after waiting 0 ms 2023-07-12 10:58:05,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,392 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 1440130602c150be12010ef49b642cd3: 2023-07-12 10:58:05,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f0bb4bf8c41d01214285263b44c56fdf, disabling compactions & flushes 2023-07-12 10:58:05,401 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 
after waiting 0 ms 2023-07-12 10:58:05,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,401 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f0bb4bf8c41d01214285263b44c56fdf: 2023-07-12 10:58:05,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159485405"}]},"ts":"1689159485405"} 2023-07-12 10:58:05,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159485405"}]},"ts":"1689159485405"} 2023-07-12 10:58:05,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159485405"}]},"ts":"1689159485405"} 2023-07-12 10:58:05,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159485405"}]},"ts":"1689159485405"} 2023-07-12 10:58:05,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159485405"}]},"ts":"1689159485405"} 2023-07-12 10:58:05,409 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 10:58:05,411 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159485411"}]},"ts":"1689159485411"} 2023-07-12 10:58:05,412 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 10:58:05,416 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:05,416 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:05,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:05,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:05,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, ASSIGN}] 2023-07-12 10:58:05,421 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, ASSIGN 2023-07-12 10:58:05,421 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, ASSIGN 2023-07-12 10:58:05,421 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, ASSIGN 2023-07-12 10:58:05,422 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, ASSIGN 2023-07-12 10:58:05,422 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, ASSIGN 2023-07-12 10:58:05,423 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,34631,1689159480880; forceNewPlan=false, retain=false 2023-07-12 10:58:05,423 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:05,423 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:05,426 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37587,1689159476845; forceNewPlan=false, retain=false 2023-07-12 10:58:05,430 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,34631,1689159480880; forceNewPlan=false, retain=false 2023-07-12 10:58:05,573 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 10:58:05,576 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=1440130602c150be12010ef49b642cd3, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:05,576 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=66ad69ea3e9dbe003d717b5c2042a6c4, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,576 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=d1f2fa8d6c1a4f9192f68614dabcd2e7, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,576 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=f0bb4bf8c41d01214285263b44c56fdf, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,577 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159485576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159485576"}]},"ts":"1689159485576"} 2023-07-12 10:58:05,577 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159485576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159485576"}]},"ts":"1689159485576"} 2023-07-12 10:58:05,577 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159485576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159485576"}]},"ts":"1689159485576"} 2023-07-12 10:58:05,577 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159485576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159485576"}]},"ts":"1689159485576"} 2023-07-12 10:58:05,577 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=6395ade763b737d33b8e61918c5b36d1, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:05,577 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485577"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159485577"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159485577"}]},"ts":"1689159485577"} 2023-07-12 10:58:05,582 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; OpenRegionProcedure 
d1f2fa8d6c1a4f9192f68614dabcd2e7, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:05,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure 66ad69ea3e9dbe003d717b5c2042a6c4, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:05,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=57, state=RUNNABLE; OpenRegionProcedure f0bb4bf8c41d01214285263b44c56fdf, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:05,587 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure 1440130602c150be12010ef49b642cd3, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:05,589 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=54, state=RUNNABLE; OpenRegionProcedure 6395ade763b737d33b8e61918c5b36d1, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:05,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 10:58:05,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d1f2fa8d6c1a4f9192f68614dabcd2e7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 10:58:05,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,744 INFO [StoreOpener-d1f2fa8d6c1a4f9192f68614dabcd2e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 
2023-07-12 10:58:05,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1440130602c150be12010ef49b642cd3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 10:58:05,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,746 DEBUG [StoreOpener-d1f2fa8d6c1a4f9192f68614dabcd2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/f 2023-07-12 10:58:05,746 DEBUG [StoreOpener-d1f2fa8d6c1a4f9192f68614dabcd2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/f 2023-07-12 10:58:05,747 INFO [StoreOpener-d1f2fa8d6c1a4f9192f68614dabcd2e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d1f2fa8d6c1a4f9192f68614dabcd2e7 columnFamilyName f 2023-07-12 10:58:05,748 INFO [StoreOpener-d1f2fa8d6c1a4f9192f68614dabcd2e7-1] regionserver.HStore(310): Store=d1f2fa8d6c1a4f9192f68614dabcd2e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:05,752 INFO [StoreOpener-1440130602c150be12010ef49b642cd3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,755 DEBUG [StoreOpener-1440130602c150be12010ef49b642cd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/f 2023-07-12 10:58:05,755 DEBUG [StoreOpener-1440130602c150be12010ef49b642cd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/f 2023-07-12 10:58:05,756 INFO [StoreOpener-1440130602c150be12010ef49b642cd3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1440130602c150be12010ef49b642cd3 columnFamilyName f 2023-07-12 10:58:05,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:05,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened d1f2fa8d6c1a4f9192f68614dabcd2e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11501421600, jitterRate=0.07115335762500763}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for d1f2fa8d6c1a4f9192f68614dabcd2e7: 2023-07-12 10:58:05,762 INFO [StoreOpener-1440130602c150be12010ef49b642cd3-1] regionserver.HStore(310): Store=1440130602c150be12010ef49b642cd3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:05,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7., pid=58, masterSystemTime=1689159485736 2023-07-12 10:58:05,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:05,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 66ad69ea3e9dbe003d717b5c2042a6c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 10:58:05,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,770 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=d1f2fa8d6c1a4f9192f68614dabcd2e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,770 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485769"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159485769"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159485769"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159485769"}]},"ts":"1689159485769"} 2023-07-12 10:58:05,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:05,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-12 10:58:05,776 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; OpenRegionProcedure d1f2fa8d6c1a4f9192f68614dabcd2e7, server=jenkins-hbase9.apache.org,37587,1689159476845 in 191 msec 2023-07-12 10:58:05,778 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, ASSIGN in 357 msec 2023-07-12 10:58:05,781 INFO [StoreOpener-66ad69ea3e9dbe003d717b5c2042a6c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,783 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1440130602c150be12010ef49b642cd3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11574077600, jitterRate=0.07791997492313385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1440130602c150be12010ef49b642cd3: 2023-07-12 10:58:05,784 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3., pid=61, masterSystemTime=1689159485740 2023-07-12 10:58:05,784 DEBUG [StoreOpener-66ad69ea3e9dbe003d717b5c2042a6c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/f 2023-07-12 10:58:05,784 DEBUG [StoreOpener-66ad69ea3e9dbe003d717b5c2042a6c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/f 2023-07-12 10:58:05,785 INFO [StoreOpener-66ad69ea3e9dbe003d717b5c2042a6c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 66ad69ea3e9dbe003d717b5c2042a6c4 columnFamilyName f 2023-07-12 10:58:05,786 INFO [StoreOpener-66ad69ea3e9dbe003d717b5c2042a6c4-1] regionserver.HStore(310): Store=66ad69ea3e9dbe003d717b5c2042a6c4/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:05,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:05,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6395ade763b737d33b8e61918c5b36d1, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 10:58:05,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,787 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=1440130602c150be12010ef49b642cd3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:05,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,788 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485787"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159485787"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159485787"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159485787"}]},"ts":"1689159485787"} 2023-07-12 10:58:05,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 
66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:05,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-12 10:58:05,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure 1440130602c150be12010ef49b642cd3, server=jenkins-hbase9.apache.org,34631,1689159480880 in 204 msec 2023-07-12 10:58:05,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, ASSIGN in 375 msec 2023-07-12 10:58:05,799 INFO [StoreOpener-6395ade763b737d33b8e61918c5b36d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,801 DEBUG [StoreOpener-6395ade763b737d33b8e61918c5b36d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/f 2023-07-12 10:58:05,802 DEBUG [StoreOpener-6395ade763b737d33b8e61918c5b36d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/f 2023-07-12 10:58:05,802 INFO [StoreOpener-6395ade763b737d33b8e61918c5b36d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6395ade763b737d33b8e61918c5b36d1 columnFamilyName f 2023-07-12 10:58:05,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 66ad69ea3e9dbe003d717b5c2042a6c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10674276320, jitterRate=-0.0058805495500564575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 66ad69ea3e9dbe003d717b5c2042a6c4: 2023-07-12 10:58:05,803 INFO [StoreOpener-6395ade763b737d33b8e61918c5b36d1-1] regionserver.HStore(310): Store=6395ade763b737d33b8e61918c5b36d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-12 10:58:05,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,807 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4., pid=59, masterSystemTime=1689159485736 2023-07-12 10:58:05,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:05,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:05,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f0bb4bf8c41d01214285263b44c56fdf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 10:58:05,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:05,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,812 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=66ad69ea3e9dbe003d717b5c2042a6c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,812 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485812"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159485812"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159485812"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159485812"}]},"ts":"1689159485812"} 2023-07-12 10:58:05,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,816 INFO [StoreOpener-f0bb4bf8c41d01214285263b44c56fdf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 6395ade763b737d33b8e61918c5b36d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11545918720, jitterRate=0.07529747486114502}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 6395ade763b737d33b8e61918c5b36d1: 2023-07-12 10:58:05,818 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1., pid=62, masterSystemTime=1689159485740 2023-07-12 10:58:05,818 DEBUG [StoreOpener-f0bb4bf8c41d01214285263b44c56fdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/f 2023-07-12 10:58:05,818 DEBUG [StoreOpener-f0bb4bf8c41d01214285263b44c56fdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/f 2023-07-12 10:58:05,819 INFO [StoreOpener-f0bb4bf8c41d01214285263b44c56fdf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f0bb4bf8c41d01214285263b44c56fdf columnFamilyName f 2023-07-12 10:58:05,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-12 10:58:05,819 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure 66ad69ea3e9dbe003d717b5c2042a6c4, server=jenkins-hbase9.apache.org,37587,1689159476845 in 232 msec 2023-07-12 10:58:05,820 INFO [StoreOpener-f0bb4bf8c41d01214285263b44c56fdf-1] regionserver.HStore(310): Store=f0bb4bf8c41d01214285263b44c56fdf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:05,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,820 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:05,821 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=6395ade763b737d33b8e61918c5b36d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:05,821 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159485821"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159485821"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159485821"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159485821"}]},"ts":"1689159485821"} 2023-07-12 10:58:05,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,822 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, ASSIGN in 402 msec 2023-07-12 10:58:05,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:05,826 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=54 2023-07-12 10:58:05,826 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=54, state=SUCCESS; OpenRegionProcedure 6395ade763b737d33b8e61918c5b36d1, server=jenkins-hbase9.apache.org,34631,1689159480880 in 234 msec 2023-07-12 10:58:05,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, ASSIGN in 407 msec 2023-07-12 10:58:05,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:05,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f0bb4bf8c41d01214285263b44c56fdf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10566188160, jitterRate=-0.015947043895721436}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:05,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f0bb4bf8c41d01214285263b44c56fdf: 2023-07-12 10:58:05,830 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf., pid=60, masterSystemTime=1689159485736 2023-07-12 10:58:05,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:05,833 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=f0bb4bf8c41d01214285263b44c56fdf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:05,833 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159485832"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159485832"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159485832"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159485832"}]},"ts":"1689159485832"} 2023-07-12 10:58:05,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=57 2023-07-12 10:58:05,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=57, state=SUCCESS; OpenRegionProcedure f0bb4bf8c41d01214285263b44c56fdf, server=jenkins-hbase9.apache.org,37587,1689159476845 in 250 msec 2023-07-12 10:58:05,840 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-12 10:58:05,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, ASSIGN in 418 msec 2023-07-12 10:58:05,841 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159485841"}]},"ts":"1689159485841"} 2023-07-12 10:58:05,842 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 10:58:05,846 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 
'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 10:58:05,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 781 msec 2023-07-12 10:58:06,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 10:58:06,183 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-12 10:58:06,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,187 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,196 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159486196"}]},"ts":"1689159486196"} 2023-07-12 10:58:06,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 10:58:06,202 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 10:58:06,204 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 10:58:06,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, UNASSIGN}, 
{pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, UNASSIGN}] 2023-07-12 10:58:06,206 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, UNASSIGN 2023-07-12 10:58:06,209 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, UNASSIGN 2023-07-12 10:58:06,209 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, UNASSIGN 2023-07-12 10:58:06,210 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, UNASSIGN 2023-07-12 10:58:06,210 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, UNASSIGN 2023-07-12 10:58:06,214 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=1440130602c150be12010ef49b642cd3, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:06,214 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=66ad69ea3e9dbe003d717b5c2042a6c4, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:06,214 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=f0bb4bf8c41d01214285263b44c56fdf, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:06,214 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=d1f2fa8d6c1a4f9192f68614dabcd2e7, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:06,215 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159486214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159486214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159486214"}]},"ts":"1689159486214"} 2023-07-12 10:58:06,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159486214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159486214"}]},"ts":"1689159486214"} 2023-07-12 10:58:06,215 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159486214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159486214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159486214"}]},"ts":"1689159486214"} 2023-07-12 10:58:06,214 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159486214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159486214"}]},"ts":"1689159486214"} 2023-07-12 10:58:06,214 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=6395ade763b737d33b8e61918c5b36d1, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:06,215 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159486214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159486214"}]},"ts":"1689159486214"} 2023-07-12 10:58:06,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=66, state=RUNNABLE; CloseRegionProcedure d1f2fa8d6c1a4f9192f68614dabcd2e7, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:06,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=68, state=RUNNABLE; CloseRegionProcedure f0bb4bf8c41d01214285263b44c56fdf, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:06,221 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=64, state=RUNNABLE; CloseRegionProcedure 66ad69ea3e9dbe003d717b5c2042a6c4, server=jenkins-hbase9.apache.org,37587,1689159476845}] 2023-07-12 10:58:06,222 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; CloseRegionProcedure 1440130602c150be12010ef49b642cd3, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:06,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=65, state=RUNNABLE; CloseRegionProcedure 6395ade763b737d33b8e61918c5b36d1, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:06,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 10:58:06,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:06,374 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 66ad69ea3e9dbe003d717b5c2042a6c4, disabling compactions & flushes 2023-07-12 10:58:06,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:06,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:06,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. after waiting 0 ms 2023-07-12 10:58:06,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 2023-07-12 10:58:06,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:06,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1440130602c150be12010ef49b642cd3, disabling compactions & flushes 2023-07-12 10:58:06,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:06,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:06,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. after waiting 0 ms 2023-07-12 10:58:06,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:06,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:06,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4. 
2023-07-12 10:58:06,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 66ad69ea3e9dbe003d717b5c2042a6c4: 2023-07-12 10:58:06,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:06,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3. 2023-07-12 10:58:06,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1440130602c150be12010ef49b642cd3: 2023-07-12 10:58:06,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:06,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:06,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f0bb4bf8c41d01214285263b44c56fdf, disabling compactions & flushes 2023-07-12 10:58:06,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:06,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:06,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. after waiting 0 ms 2023-07-12 10:58:06,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 2023-07-12 10:58:06,390 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=66ad69ea3e9dbe003d717b5c2042a6c4, regionState=CLOSED 2023-07-12 10:58:06,390 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159486390"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159486390"}]},"ts":"1689159486390"} 2023-07-12 10:58:06,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1440130602c150be12010ef49b642cd3 2023-07-12 10:58:06,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:06,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 6395ade763b737d33b8e61918c5b36d1, disabling compactions & flushes 2023-07-12 10:58:06,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 
2023-07-12 10:58:06,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:06,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. after waiting 0 ms 2023-07-12 10:58:06,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:06,402 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=1440130602c150be12010ef49b642cd3, regionState=CLOSED 2023-07-12 10:58:06,402 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159486402"}]},"ts":"1689159486402"} 2023-07-12 10:58:06,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:06,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf. 
2023-07-12 10:58:06,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f0bb4bf8c41d01214285263b44c56fdf: 2023-07-12 10:58:06,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:06,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:06,410 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=64 2023-07-12 10:58:06,411 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=64, state=SUCCESS; CloseRegionProcedure 66ad69ea3e9dbe003d717b5c2042a6c4, server=jenkins-hbase9.apache.org,37587,1689159476845 in 174 msec 2023-07-12 10:58:06,411 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=f0bb4bf8c41d01214285263b44c56fdf, regionState=CLOSED 2023-07-12 10:58:06,411 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689159486411"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159486411"}]},"ts":"1689159486411"} 2023-07-12 10:58:06,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing d1f2fa8d6c1a4f9192f68614dabcd2e7, disabling compactions & flushes 2023-07-12 10:58:06,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:06,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 2023-07-12 10:58:06,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. after waiting 0 ms 2023-07-12 10:58:06,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 
2023-07-12 10:58:06,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-12 10:58:06,417 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=66ad69ea3e9dbe003d717b5c2042a6c4, UNASSIGN in 206 msec 2023-07-12 10:58:06,417 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; CloseRegionProcedure 1440130602c150be12010ef49b642cd3, server=jenkins-hbase9.apache.org,34631,1689159480880 in 185 msec 2023-07-12 10:58:06,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:06,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1. 2023-07-12 10:58:06,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=68 2023-07-12 10:58:06,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 6395ade763b737d33b8e61918c5b36d1: 2023-07-12 10:58:06,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=68, state=SUCCESS; CloseRegionProcedure f0bb4bf8c41d01214285263b44c56fdf, server=jenkins-hbase9.apache.org,37587,1689159476845 in 194 msec 2023-07-12 10:58:06,418 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1440130602c150be12010ef49b642cd3, UNASSIGN in 212 msec 2023-07-12 10:58:06,420 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f0bb4bf8c41d01214285263b44c56fdf, UNASSIGN in 214 msec 2023-07-12 10:58:06,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:06,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:06,422 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=6395ade763b737d33b8e61918c5b36d1, regionState=CLOSED 2023-07-12 10:58:06,422 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486421"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159486421"}]},"ts":"1689159486421"} 2023-07-12 10:58:06,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7. 
2023-07-12 10:58:06,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for d1f2fa8d6c1a4f9192f68614dabcd2e7: 2023-07-12 10:58:06,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:06,425 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=d1f2fa8d6c1a4f9192f68614dabcd2e7, regionState=CLOSED 2023-07-12 10:58:06,425 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689159486425"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159486425"}]},"ts":"1689159486425"} 2023-07-12 10:58:06,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=65 2023-07-12 10:58:06,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=65, state=SUCCESS; CloseRegionProcedure 6395ade763b737d33b8e61918c5b36d1, server=jenkins-hbase9.apache.org,34631,1689159480880 in 202 msec 2023-07-12 10:58:06,430 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6395ade763b737d33b8e61918c5b36d1, UNASSIGN in 223 msec 2023-07-12 10:58:06,431 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-12 10:58:06,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; CloseRegionProcedure d1f2fa8d6c1a4f9192f68614dabcd2e7, server=jenkins-hbase9.apache.org,37587,1689159476845 in 211 msec 2023-07-12 10:58:06,433 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-12 10:58:06,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1f2fa8d6c1a4f9192f68614dabcd2e7, UNASSIGN in 227 msec 2023-07-12 10:58:06,434 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159486434"}]},"ts":"1689159486434"} 2023-07-12 10:58:06,436 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 10:58:06,438 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 10:58:06,440 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 251 msec 2023-07-12 10:58:06,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 10:58:06,502 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-12 10:58:06,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,517 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,519 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1238877202' 2023-07-12 10:58:06,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:06,526 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-12 10:58:06,556 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:06,556 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:06,556 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:06,559 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/recovered.edits] 2023-07-12 10:58:06,559 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/recovered.edits] 2023-07-12 10:58:06,560 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/recovered.edits] 2023-07-12 10:58:06,561 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:06,562 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:06,567 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/recovered.edits] 2023-07-12 10:58:06,568 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/recovered.edits] 2023-07-12 10:58:06,583 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7/recovered.edits/4.seqid 2023-07-12 10:58:06,584 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1f2fa8d6c1a4f9192f68614dabcd2e7 2023-07-12 10:58:06,586 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1/recovered.edits/4.seqid 2023-07-12 10:58:06,586 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/recovered.edits/4.seqid to 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf/recovered.edits/4.seqid 2023-07-12 10:58:06,587 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4/recovered.edits/4.seqid 2023-07-12 10:58:06,588 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f0bb4bf8c41d01214285263b44c56fdf 2023-07-12 10:58:06,589 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3/recovered.edits/4.seqid 2023-07-12 10:58:06,589 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/66ad69ea3e9dbe003d717b5c2042a6c4 2023-07-12 10:58:06,589 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6395ade763b737d33b8e61918c5b36d1 2023-07-12 10:58:06,590 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1440130602c150be12010ef49b642cd3 2023-07-12 10:58:06,590 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 10:58:06,593 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,600 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 10:58:06,604 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 10:58:06,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159486607"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159486607"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159486607"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689159485124.1440130602c150be12010ef49b642cd3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159486607"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,607 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159486607"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,610 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 10:58:06,610 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 66ad69ea3e9dbe003d717b5c2042a6c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689159485124.66ad69ea3e9dbe003d717b5c2042a6c4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6395ade763b737d33b8e61918c5b36d1, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689159485124.6395ade763b737d33b8e61918c5b36d1.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => d1f2fa8d6c1a4f9192f68614dabcd2e7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689159485124.d1f2fa8d6c1a4f9192f68614dabcd2e7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1440130602c150be12010ef49b642cd3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689159485124.1440130602c150be12010ef49b642cd3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f0bb4bf8c41d01214285263b44c56fdf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689159485124.f0bb4bf8c41d01214285263b44c56fdf.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 10:58:06,610 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 10:58:06,610 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159486610"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:06,623 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 10:58:06,626 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 10:58:06,629 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 118 msec 2023-07-12 10:58:06,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-12 10:58:06,643 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-12 10:58:06,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:06,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:06,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1238877202, current retry=0 2023-07-12 10:58:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1238877202 => default 2023-07-12 10:58:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testTableMoveTruncateAndDrop_1238877202 2023-07-12 10:58:06,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:06,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:06,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:06,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:06,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:06,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,721 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:06,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:06,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:06,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:06,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:06,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160686744, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:06,745 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:06,747 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:06,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,749 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:06,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:06,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,775 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=496 (was 426) Potentially hanging thread: PacketResponder: BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:34631Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52490@0x54244694 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1763783791_17 at /127.0.0.1:54208 [Receiving block BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:34631-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52490@0x54244694-SendThread(127.0.0.1:52490) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1106068443-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1763783791_17 at /127.0.0.1:32812 [Receiving block BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1106068443-641-acceptor-0@383df43e-ServerConnector@1c038137{HTTP/1.1, (http/1.1)}{0.0.0.0:44411} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1763783791_17 at /127.0.0.1:53080 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1763783791_17 at /127.0.0.1:53064 [Receiving block BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:34631 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x5310854c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1a482f9e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:36583 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-372595763-172.31.2.10-1689159471177:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36583 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52490@0x54244694-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1763783791_17 at /127.0.0.1:32894 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-640 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4-prefix:jenkins-hbase9.apache.org,34631,1689159480880 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x5310854c-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1106068443-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 686) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=328 (was 321) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=6367 (was 6841) 2023-07-12 10:58:06,792 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=328, ProcessCount=172, AvailableMemoryMB=6365 2023-07-12 10:58:06,792 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 10:58:06,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:06,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:06,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:06,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:06,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,809 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:06,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:06,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:06,816 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:06,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:06,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160686821, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:06,822 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:06,824 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:06,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,825 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:06,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:06,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,828 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo* 2023-07-12 10:58:06,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.2.10:34382 deadline: 1689160686827, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 10:58:06,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo@ 2023-07-12 10:58:06,829 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.2.10:34382 deadline: 1689160686829, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 10:58:06,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup - 2023-07-12 10:58:06,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.2.10:34382 deadline: 1689160686831, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 10:58:06,832 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo_123 2023-07-12 10:58:06,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 10:58:06,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:06,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:06,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
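The three rejected names above (foo*, foo@, "-") and the accepted foo_123 suggest the constraint that RSGroupInfoManagerImpl.checkGroupName enforces when "add rsgroup" is called: letters, digits and underscore only. Below is a minimal, self-contained sketch of an equivalent check; the character set is inferred from the log output above and the exact pattern used in branch-2.4 may differ, and the real code throws org.apache.hadoop.hbase.constraint.ConstraintException rather than IllegalArgumentException.

import java.util.regex.Pattern;

public class GroupNameCheckSketch {
  // Assumed character set, inferred from this log:
  // "foo*", "foo@" and "-" are rejected, "foo_123" is accepted.
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // Stand-in for the ConstraintException seen in the log above.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String name : new String[] { "foo*", "foo@", "-", "foo_123" }) {
      try {
        checkGroupName(name);
        System.out.println(name + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(name + " -> rejected: " + e.getMessage());
      }
    }
  }
}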
2023-07-12 10:58:06,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:06,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup foo_123 2023-07-12 10:58:06,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:06,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
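The recurring "Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist" ConstraintException above (and again below) is produced when the test harness's setup/teardown asks RSGroupAdminClient.moveServers to move the active master's address into the "master" rsgroup; port 38969 is the master's RPC port in this run, not a live region server, so RSGroupAdminServer.moveServers rejects the call and TestRSGroupsBase only logs it ("Got this on setup, FYI"). A rough sketch of the client-side call that triggers this error is shown below; class and method names are taken from the stack traces in this log, but the surrounding setup (the open Connection and the RSGroupAdminClient(Connection) constructor) is assumed for illustration.

import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveOfflineServerSketch {
  // "conn" is assumed to be an open Connection to the mini cluster.
  static void moveMasterAddressIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    admin.addRSGroup("master");
    try {
      // 38969 is the master's RPC port in this run, not a region server,
      // so the server side rejects the move.
      admin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 38969)),
          "master");
    } catch (ConstraintException expected) {
      // "Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist."
    }
  }
}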
2023-07-12 10:58:06,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:06,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:06,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:06,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,897 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:06,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:06,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:06,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:06,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:06,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:06,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:06,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160686914, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:06,915 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:06,917 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:06,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,919 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:06,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:06,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:06,944 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=499 (was 496) Potentially hanging thread: hconnection-0x2be10837-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 781), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=328 (was 328), ProcessCount=172 (was 172), AvailableMemoryMB=6352 (was 6365) 2023-07-12 10:58:06,969 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=499, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=328, ProcessCount=172, AvailableMemoryMB=6348 2023-07-12 10:58:06,969 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 10:58:06,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:06,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:06,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:06,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
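The hbase.ResourceChecker(147)/(175) lines bracketing each test above compare thread count, open file descriptors, system load and available memory before and after the method and flag growth as a possible leak, dumping the stack of each new surviving thread as "Potentially hanging thread: ...". A minimal JDK-only illustration of the thread part of that before/after diff is sketched below; the names here are illustrative and not the ResourceChecker API.

import java.util.HashSet;
import java.util.Set;

public class ThreadDiffSketch {
  // Snapshot the names of all live threads.
  static Set<String> liveThreadNames() {
    Set<String> names = new HashSet<>();
    for (Thread t : Thread.getAllStackTraces().keySet()) {
      names.add(t.getName());
    }
    return names;
  }

  public static void main(String[] args) throws Exception {
    Set<String> before = liveThreadNames();

    // ... run the test body here ...

    Set<String> survivors = liveThreadNames();
    survivors.removeAll(before);
    for (String name : survivors) {
      // ResourceChecker reports these together with each thread's stack trace.
      System.out.println("Potentially hanging thread: " + name);
    }
  }
}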
2023-07-12 10:58:06,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:06,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:06,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:06,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:06,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:06,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:06,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:06,996 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:06,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:07,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:07,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:07,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:07,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:07,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:07,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:07,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:07,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:07,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160687011, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:07,012 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:07,014 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:07,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:07,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:07,016 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:07,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:07,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:07,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:07,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:07,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:07,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:07,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup bar 2023-07-12 
10:58:07,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:07,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:07,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:07,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:07,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:07,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:07,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:07,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup bar 2023-07-12 10:58:07,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:07,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:07,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:07,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:07,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(238): Moving server region 15c37347446d32713d4f7b66d1f2d533, which do not belong to RSGroup bar 2023-07-12 10:58:07,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE 2023-07-12 10:58:07,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 10:58:07,041 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE 2023-07-12 10:58:07,041 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:07,042 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487041"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159487041"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159487041"}]},"ts":"1689159487041"} 2023-07-12 10:58:07,043 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:07,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 15c37347446d32713d4f7b66d1f2d533, disabling compactions & flushes 2023-07-12 10:58:07,198 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. after waiting 0 ms 2023-07-12 10:58:07,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,198 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 15c37347446d32713d4f7b66d1f2d533 1/1 column families, dataSize=4.99 KB heapSize=8.40 KB 2023-07-12 10:58:07,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.99 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,233 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/8336f932b02e405fa39fe4dbddf71df5 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/8336f932b02e405fa39fe4dbddf71df5, entries=9, sequenceid=32, filesize=5.5 K 2023-07-12 10:58:07,246 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.99 KB/5109, heapSize ~8.38 KB/8584, currentSize=0 B/0 for 15c37347446d32713d4f7b66d1f2d533 in 48ms, sequenceid=32, compaction requested=false 2023-07-12 10:58:07,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-12 10:58:07,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:07,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:07,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 15c37347446d32713d4f7b66d1f2d533 move to jenkins-hbase9.apache.org,46327,1689159476604 record at close sequenceid=32 2023-07-12 10:58:07,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,263 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=CLOSED 2023-07-12 10:58:07,264 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487263"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159487263"}]},"ts":"1689159487263"} 2023-07-12 10:58:07,268 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-12 10:58:07,268 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,39937,1689159477046 in 223 msec 2023-07-12 10:58:07,269 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:07,420 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:07,420 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159487420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159487420"}]},"ts":"1689159487420"} 2023-07-12 10:58:07,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; 
OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:07,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15c37347446d32713d4f7b66d1f2d533, NAME => 'hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:07,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:07,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. service=MultiRowMutationService 2023-07-12 10:58:07,585 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 10:58:07,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:07,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,587 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,589 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:07,589 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m 2023-07-12 10:58:07,589 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15c37347446d32713d4f7b66d1f2d533 columnFamilyName m 2023-07-12 10:58:07,610 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,610 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(539): loaded hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/8336f932b02e405fa39fe4dbddf71df5 2023-07-12 10:58:07,623 DEBUG [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(539): loaded hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/fa03ad3aba974bcb876fb4ea7607a1c9 2023-07-12 10:58:07,623 INFO [StoreOpener-15c37347446d32713d4f7b66d1f2d533-1] regionserver.HStore(310): Store=15c37347446d32713d4f7b66d1f2d533/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:07,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:07,631 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 15c37347446d32713d4f7b66d1f2d533; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@72ed58bf, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:07,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:07,633 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533., pid=77, masterSystemTime=1689159487578 2023-07-12 10:58:07,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:07,634 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:07,635 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=15c37347446d32713d4f7b66d1f2d533, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:07,635 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159487635"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159487635"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159487635"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159487635"}]},"ts":"1689159487635"} 2023-07-12 10:58:07,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-12 10:58:07,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 15c37347446d32713d4f7b66d1f2d533, server=jenkins-hbase9.apache.org,46327,1689159476604 in 215 msec 2023-07-12 10:58:07,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=15c37347446d32713d4f7b66d1f2d533, REOPEN/MOVE in 602 msec 2023-07-12 10:58:08,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-12 10:58:08,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845, jenkins-hbase9.apache.org,39937,1689159477046] are moved back to default 2023-07-12 10:58:08,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 10:58:08,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:08,042 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39937] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 172.31.2.10:60682 deadline: 1689159548042, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=46327 startCode=1689159476604. As of locationSeqNum=32. 
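The entries above trace RSGroupAdminService.AddRSGroup creating group "bar" and RSGroupAdminService.MoveServers pulling three region servers into it, which forces the hbase:rsgroup region through the REOPEN/MOVE procedure (pid=75) onto the remaining default-group server. A minimal client-side sketch of the call sequence that produces those requests, using the RSGroupAdminClient and Address types named in the stack traces above; the connection setup and the host/port literals are placeholders, not values from this run:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");                       // AddRSGroup RPC
          Set<Address> servers = new HashSet<>(Arrays.asList(   // placeholder hostnames/ports
              Address.fromParts("regionserver-1.example.org", 16020),
              Address.fromParts("regionserver-2.example.org", 16020),
              Address.fromParts("regionserver-3.example.org", 16020)));
          // MoveServers RPC: regions hosted on these servers that do not belong to "bar"
          // are first moved back to the servers left in the default group, as in pid=75 above.
          rsGroupAdmin.moveServers(servers, "bar");
        }
      }
    }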
2023-07-12 10:58:08,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:08,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:08,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bar 2023-07-12 10:58:08,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:08,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:08,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:08,164 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:08,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-12 10:58:08,165 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39937] ipc.CallRunner(144): callId: 188 service: ClientService methodName: ExecService size: 528 connection: 172.31.2.10:60690 deadline: 1689159548164, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=46327 startCode=1689159476604. As of locationSeqNum=32. 
2023-07-12 10:58:08,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 10:58:08,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 10:58:08,269 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,269 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:08,270 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:08,270 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:08,272 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:08,274 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,275 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 empty. 2023-07-12 10:58:08,275 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,275 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 10:58:08,293 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:08,294 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9d40145ce88747e3e848ca87a6867322, NAME => 'Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:08,312 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:08,312 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 9d40145ce88747e3e848ca87a6867322, disabling compactions & flushes 2023-07-12 10:58:08,312 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,313 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,313 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. after waiting 0 ms 2023-07-12 10:58:08,313 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,313 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,313 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:08,315 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:08,316 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159488316"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159488316"}]},"ts":"1689159488316"} 2023-07-12 10:58:08,318 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 10:58:08,319 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:08,319 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159488319"}]},"ts":"1689159488319"} 2023-07-12 10:58:08,320 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 10:58:08,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, ASSIGN}] 2023-07-12 10:58:08,327 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, ASSIGN 2023-07-12 10:58:08,328 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:08,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 10:58:08,480 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:08,480 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159488480"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159488480"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159488480"}]},"ts":"1689159488480"} 2023-07-12 10:58:08,482 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:08,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:08,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d40145ce88747e3e848ca87a6867322, NAME => 'Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:08,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:08,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,639 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,641 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:08,641 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:08,642 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d40145ce88747e3e848ca87a6867322 columnFamilyName f 2023-07-12 10:58:08,643 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(310): Store=9d40145ce88747e3e848ca87a6867322/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:08,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,644 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:08,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 9d40145ce88747e3e848ca87a6867322; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11451584480, jitterRate=0.06651191413402557}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:08,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:08,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322., pid=80, masterSystemTime=1689159488633 2023-07-12 10:58:08,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:08,664 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:08,664 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159488664"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159488664"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159488664"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159488664"}]},"ts":"1689159488664"} 2023-07-12 10:58:08,668 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-12 10:58:08,668 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604 in 184 msec 2023-07-12 10:58:08,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-12 10:58:08,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, ASSIGN in 342 msec 2023-07-12 10:58:08,671 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:08,671 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159488671"}]},"ts":"1689159488671"} 2023-07-12 10:58:08,672 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 10:58:08,675 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:08,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 514 msec 2023-07-12 10:58:08,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 10:58:08,770 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-12 10:58:08,770 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 10:58:08,770 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:08,775 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
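The create request logged at 10:58:08,161 and the CreateTableProcedure it drives (pid=78, through CREATE_TABLE_POST_OPERATION) describe a single-region-replica table with one column family 'f', one version, and no bloom filter. A rough Java equivalent of that create call, assuming an already-open Connection; the method name below is illustrative only:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Build and create a table matching the descriptor logged for pid=78:
    // REGION_REPLICATION => 1, family 'f' with VERSIONS => 1 and BLOOMFILTER => NONE.
    void createGroupTable(Connection conn) throws Exception {
      TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
      TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(tableName)
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build());
      try (Admin admin = conn.getAdmin()) {
        admin.createTable(table.build());   // submits the CreateTableProcedure on the master
      }
    }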
2023-07-12 10:58:08,775 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:08,776 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 10:58:08,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 10:58:08,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:08,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:08,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:08,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:08,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 10:58:08,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 9d40145ce88747e3e848ca87a6867322 to RSGroup bar 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:08,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE 2023-07-12 10:58:08,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 10:58:08,789 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE 2023-07-12 10:58:08,790 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:08,790 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159488790"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159488790"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159488790"}]},"ts":"1689159488790"} 2023-07-12 10:58:08,791 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:08,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 9d40145ce88747e3e848ca87a6867322, disabling compactions & flushes 2023-07-12 10:58:08,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. after waiting 0 ms 2023-07-12 10:58:08,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:08,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:08,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:08,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:08,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 9d40145ce88747e3e848ca87a6867322 move to jenkins-hbase9.apache.org,39937,1689159477046 record at close sequenceid=2 2023-07-12 10:58:08,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:08,955 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSED 2023-07-12 10:58:08,955 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159488955"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159488955"}]},"ts":"1689159488955"} 2023-07-12 10:58:08,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-12 10:58:08,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604 in 166 msec 2023-07-12 10:58:08,960 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:09,111 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:09,111 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:09,111 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159489111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159489111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159489111"}]},"ts":"1689159489111"} 2023-07-12 10:58:09,114 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:09,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:09,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d40145ce88747e3e848ca87a6867322, NAME => 'Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:09,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:09,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,284 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,285 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:09,285 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:09,285 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d40145ce88747e3e848ca87a6867322 columnFamilyName f 2023-07-12 10:58:09,286 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(310): Store=9d40145ce88747e3e848ca87a6867322/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:09,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,288 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 9d40145ce88747e3e848ca87a6867322; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9469905440, jitterRate=-0.11804632842540741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:09,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:09,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322., pid=83, masterSystemTime=1689159489277 2023-07-12 10:58:09,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:09,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:09,296 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:09,296 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159489295"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159489295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159489295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159489295"}]},"ts":"1689159489295"} 2023-07-12 10:58:09,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-12 10:58:09,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,39937,1689159477046 in 183 msec 2023-07-12 10:58:09,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE in 512 msec 2023-07-12 10:58:09,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-12 10:58:09,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-12 10:58:09,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:09,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:09,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:09,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bar 2023-07-12 10:58:09,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:09,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-12 10:58:09,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:09,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.2.10:34382 deadline: 1689160689798, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 10:58:09,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:09,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:09,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.2.10:34382 deadline: 1689160689799, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 10:58:09,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 10:58:09,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:09,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:09,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:09,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:09,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 10:58:09,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 9d40145ce88747e3e848ca87a6867322 to RSGroup default 2023-07-12 10:58:09,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE 2023-07-12 10:58:09,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 10:58:09,812 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE 2023-07-12 10:58:09,816 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:09,817 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159489816"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159489816"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159489816"}]},"ts":"1689159489816"} 2023-07-12 10:58:09,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:09,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 9d40145ce88747e3e848ca87a6867322, disabling compactions & flushes 2023-07-12 10:58:09,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:09,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:09,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. after waiting 0 ms 2023-07-12 10:58:09,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:09,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:09,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:09,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:09,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 9d40145ce88747e3e848ca87a6867322 move to jenkins-hbase9.apache.org,46327,1689159476604 record at close sequenceid=5 2023-07-12 10:58:09,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:09,980 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSED 2023-07-12 10:58:09,981 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159489980"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159489980"}]},"ts":"1689159489980"} 2023-07-12 10:58:09,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 10:58:09,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,39937,1689159477046 in 164 msec 2023-07-12 10:58:09,985 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:10,135 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:10,136 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159490135"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159490135"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159490135"}]},"ts":"1689159490135"} 2023-07-12 10:58:10,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:10,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 
2023-07-12 10:58:10,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d40145ce88747e3e848ca87a6867322, NAME => 'Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,295 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,296 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:10,297 DEBUG [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f 2023-07-12 10:58:10,297 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d40145ce88747e3e848ca87a6867322 columnFamilyName f 2023-07-12 10:58:10,298 INFO [StoreOpener-9d40145ce88747e3e848ca87a6867322-1] regionserver.HStore(310): Store=9d40145ce88747e3e848ca87a6867322/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:10,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,299 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:10,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 9d40145ce88747e3e848ca87a6867322; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12068368960, jitterRate=0.12395444512367249}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:10,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:10,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322., pid=86, masterSystemTime=1689159490289 2023-07-12 10:58:10,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:10,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:10,305 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:10,305 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159490305"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159490305"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159490305"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159490305"}]},"ts":"1689159490305"} 2023-07-12 10:58:10,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-12 10:58:10,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604 in 170 msec 2023-07-12 10:58:10,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, REOPEN/MOVE in 498 msec 2023-07-12 10:58:10,446 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:10,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-12 10:58:10,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-12 10:58:10,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:10,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:10,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:10,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-12 10:58:10,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:10,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.2.10:34382 deadline: 1689160690819, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
2023-07-12 10:58:10,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:10,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:10,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 10:58:10,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:10,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:10,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 10:58:10,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845, jenkins-hbase9.apache.org,39937,1689159477046] are moved back to bar 2023-07-12 10:58:10,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 10:58:10,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:10,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:10,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:10,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-12 10:58:10,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:10,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:10,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:10,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:10,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:10,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:10,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:10,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:10,850 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 10:58:10,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testFailRemoveGroup 2023-07-12 10:58:10,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:10,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 10:58:10,855 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159490855"}]},"ts":"1689159490855"} 2023-07-12 10:58:10,856 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 10:58:10,862 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 10:58:10,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, UNASSIGN}] 2023-07-12 10:58:10,865 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, UNASSIGN 2023-07-12 10:58:10,865 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:10,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159490865"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159490865"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159490865"}]},"ts":"1689159490865"} 2023-07-12 10:58:10,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:10,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 10:58:11,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(1604): Closing 9d40145ce88747e3e848ca87a6867322, disabling compactions & flushes 2023-07-12 10:58:11,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. after waiting 0 ms 2023-07-12 10:58:11,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:11,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 10:58:11,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322. 2023-07-12 10:58:11,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 9d40145ce88747e3e848ca87a6867322: 2023-07-12 10:58:11,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:11,036 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=9d40145ce88747e3e848ca87a6867322, regionState=CLOSED 2023-07-12 10:58:11,036 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689159491036"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159491036"}]},"ts":"1689159491036"} 2023-07-12 10:58:11,041 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-12 10:58:11,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure 9d40145ce88747e3e848ca87a6867322, server=jenkins-hbase9.apache.org,46327,1689159476604 in 171 msec 2023-07-12 10:58:11,044 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 10:58:11,044 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=9d40145ce88747e3e848ca87a6867322, UNASSIGN in 178 msec 2023-07-12 10:58:11,044 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159491044"}]},"ts":"1689159491044"} 2023-07-12 10:58:11,046 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-12 10:58:11,052 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set 
Group_testFailRemoveGroup to state=DISABLED 2023-07-12 10:58:11,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 202 msec 2023-07-12 10:58:11,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 10:58:11,157 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-12 10:58:11,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testFailRemoveGroup 2023-07-12 10:58:11,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,161 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 10:58:11,161 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,165 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:11,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 10:58:11,168 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits] 2023-07-12 10:58:11,173 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/10.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322/recovered.edits/10.seqid 2023-07-12 10:58:11,174 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testFailRemoveGroup/9d40145ce88747e3e848ca87a6867322 2023-07-12 10:58:11,174 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 10:58:11,176 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,179 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 10:58:11,181 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 10:58:11,183 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,183 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-12 10:58:11,183 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159491183"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:11,185 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:11,185 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9d40145ce88747e3e848ca87a6867322, NAME => 'Group_testFailRemoveGroup,,1689159488161.9d40145ce88747e3e848ca87a6867322.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:11,185 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-12 10:58:11,185 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159491185"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:11,187 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 10:58:11,189 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 10:58:11,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 31 msec 2023-07-12 10:58:11,232 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:11,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 10:58:11,268 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-12 10:58:11,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:11,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:11,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:11,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:11,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:11,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:11,284 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:11,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:11,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:11,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:11,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:11,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160691300, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:11,301 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:11,303 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:11,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,304 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:11,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:11,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,323 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=505 (was 499) Potentially hanging thread: hconnection-0x5310854c-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data3/current sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_170865298_17 at /127.0.0.1:53212 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_170865298_17 at /127.0.0.1:53080 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-10 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-83128980_17 at /127.0.0.1:32894 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495442723_17 at /127.0.0.1:54340 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_170865298_17 at /127.0.0.1:54336 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x41a50c31-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=786 (was 781) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 328) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=6176 (was 6348) 2023-07-12 10:58:11,323 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 10:58:11,339 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=505, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=172, AvailableMemoryMB=6176 2023-07-12 10:58:11,339 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 10:58:11,340 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 10:58:11,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:11,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:11,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:11,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:11,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:11,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:11,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:11,357 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:11,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:11,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
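The per-method cleanup logged just above (move tables [] to rsgroup default, move servers [] to rsgroup default, RemoveRSGroup/AddRSGroup for the "master" group) is driven through the rsgroup admin client whose frames appear in the stack traces (RSGroupAdminClient.moveServers, VerifyingRSGroupAdminClient.moveServers). A minimal sketch of that call sequence, assuming the branch-2.4 RSGroupAdminClient constructor and method signatures; the connection setup is illustrative and the master address is taken from this run only as an example:

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      // The empty-set calls logged as "move tables [] to rsgroup default"
      // and "move servers [] to rsgroup default".
      groups.moveTables(Collections.<TableName>emptySet(), "default");
      groups.moveServers(Collections.<Address>emptySet(), "default");

      // Drop and re-create the "master" group between test methods.
      groups.removeRSGroup("master");
      groups.addRSGroup("master");

      // Moving the master's own address into the group is rejected with the
      // ConstraintException seen in the log, because that address is not an
      // online region server.
      Address master = Address.fromParts("jenkins-hbase9.apache.org", 38969);
      try {
        groups.moveServers(Collections.singleton(master), "master");
      } catch (IOException e) {
        // "Server ... is either offline or it does not exist."
        System.out.println("moveServers rejected: " + e.getMessage());
      }
    }
  }
}
```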
2023-07-12 10:58:11,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:11,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:11,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:11,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:11,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160691368, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:11,369 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:11,372 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:11,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,374 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:11,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:11,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:11,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testMultiTableMove_809721058 2023-07-12 10:58:11,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:11,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:11,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:11,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:34631] to rsgroup Group_testMultiTableMove_809721058 2023-07-12 10:58:11,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:11,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:11,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:11,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880] are moved back to default 2023-07-12 10:58:11,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_809721058 2023-07-12 10:58:11,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:11,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:11,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:11,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testMultiTableMove_809721058 2023-07-12 10:58:11,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:11,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:11,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:11,402 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 
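The create-table request logged above ('GrouptestMultiTableMoveA' with a single family 'f' and the attribute dump printed by the master) corresponds to the standard HBase 2.x Admin API. A minimal sketch, assuming an already-configured Connection; BLOOMFILTER => 'NONE' appears to be the one attribute that differs from the descriptor-builder defaults, so it is set explicitly, while the remaining printed attributes match the defaults:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Single column family "f"; REGION_REPLICATION => '1', VERSIONS => '1',
      // COMPRESSION => 'NONE', TTL => 'FOREVER' etc. are builder defaults.
      admin.createTable(TableDescriptorBuilder.newBuilder(tableA)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build());
    }
  }
}
```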
2023-07-12 10:58:11,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-12 10:58:11,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:11,404 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:11,405 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:11,405 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:11,406 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:11,408 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:11,409 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,410 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 empty. 2023-07-12 10:58:11,410 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,410 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 10:58:11,432 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:11,433 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => c26ede90038d3ea0a8e7e04637fbc8c3, NAME => 'GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing c26ede90038d3ea0a8e7e04637fbc8c3, disabling compactions & flushes 2023-07-12 10:58:11,462 
INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. after waiting 0 ms 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,462 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,462 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for c26ede90038d3ea0a8e7e04637fbc8c3: 2023-07-12 10:58:11,466 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:11,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159491474"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159491474"}]},"ts":"1689159491474"} 2023-07-12 10:58:11,476 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 10:58:11,478 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:11,478 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159491478"}]},"ts":"1689159491478"} 2023-07-12 10:58:11,480 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 10:58:11,487 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:11,487 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:11,487 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:11,487 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:11,487 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:11,487 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, ASSIGN}] 2023-07-12 10:58:11,490 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, ASSIGN 2023-07-12 10:58:11,491 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:11,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:11,642 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:11,644 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:11,644 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159491643"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159491643"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159491643"}]},"ts":"1689159491643"} 2023-07-12 10:58:11,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:11,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:11,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c26ede90038d3ea0a8e7e04637fbc8c3, NAME => 'GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:11,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:11,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,812 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,814 DEBUG [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/f 2023-07-12 10:58:11,814 DEBUG [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/f 2023-07-12 10:58:11,814 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c26ede90038d3ea0a8e7e04637fbc8c3 columnFamilyName f 2023-07-12 10:58:11,815 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] regionserver.HStore(310): Store=c26ede90038d3ea0a8e7e04637fbc8c3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:11,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:11,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:11,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened c26ede90038d3ea0a8e7e04637fbc8c3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11709660800, jitterRate=0.09054714441299438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:11,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for c26ede90038d3ea0a8e7e04637fbc8c3: 2023-07-12 10:58:11,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3., pid=93, masterSystemTime=1689159491805 2023-07-12 10:58:11,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:11,830 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 
2023-07-12 10:58:11,830 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:11,830 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159491830"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159491830"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159491830"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159491830"}]},"ts":"1689159491830"} 2023-07-12 10:58:11,834 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-12 10:58:11,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,46327,1689159476604 in 186 msec 2023-07-12 10:58:11,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-12 10:58:11,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, ASSIGN in 348 msec 2023-07-12 10:58:11,838 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:11,838 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159491838"}]},"ts":"1689159491838"} 2023-07-12 10:58:11,839 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 10:58:11,845 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:11,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 445 msec 2023-07-12 10:58:12,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 10:58:12,008 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-12 10:58:12,008 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 10:58:12,009 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,012 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-12 10:58:12,012 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,012 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 10:58:12,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:12,016 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:12,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-12 10:58:12,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:12,019 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:12,019 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:12,020 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:12,020 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:12,022 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:12,024 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,024 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 empty. 
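[Editor's note] The `create 'GrouptestMultiTableMoveB', {...}` request logged by HMaster$4 above is driven by a client-side Admin.createTable call. Below is a minimal sketch of an equivalent request using the standard HBase 2.x client API; the table name, family 'f', and REGION_REPLICATION value are taken from the log, while the class name and everything else is illustrative and not the test's actual code.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Same shape as the logged request: one column family 'f',
      // REGION_REPLICATION => '1', all other attributes left at defaults.
      TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
    }
  }
}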
2023-07-12 10:58:12,025 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,025 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 10:58:12,041 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:12,042 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 56108a624cdfec13e9ff4e567c057307, NAME => 'GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:12,058 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:12,059 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 56108a624cdfec13e9ff4e567c057307, disabling compactions & flushes 2023-07-12 10:58:12,059 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,059 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,059 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. after waiting 0 ms 2023-07-12 10:58:12,059 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,059 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 
2023-07-12 10:58:12,059 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 56108a624cdfec13e9ff4e567c057307: 2023-07-12 10:58:12,062 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:12,063 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159492063"}]},"ts":"1689159492063"} 2023-07-12 10:58:12,065 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:12,066 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:12,066 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159492066"}]},"ts":"1689159492066"} 2023-07-12 10:58:12,067 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 10:58:12,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:12,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:12,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:12,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:12,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:12,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, ASSIGN}] 2023-07-12 10:58:12,073 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, ASSIGN 2023-07-12 10:58:12,074 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:12,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:12,224 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:12,226 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:12,226 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492226"}]},"ts":"1689159492226"} 2023-07-12 10:58:12,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:12,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:12,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 56108a624cdfec13e9ff4e567c057307, NAME => 'GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:12,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:12,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,386 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,387 DEBUG [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/f 2023-07-12 10:58:12,387 DEBUG [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/f 2023-07-12 10:58:12,388 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 56108a624cdfec13e9ff4e567c057307 columnFamilyName f 2023-07-12 10:58:12,388 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] regionserver.HStore(310): Store=56108a624cdfec13e9ff4e567c057307/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:12,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:12,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 56108a624cdfec13e9ff4e567c057307; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9764846080, jitterRate=-0.09057784080505371}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:12,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 56108a624cdfec13e9ff4e567c057307: 2023-07-12 10:58:12,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307., pid=96, masterSystemTime=1689159492380 2023-07-12 10:58:12,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 
2023-07-12 10:58:12,398 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:12,398 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492398"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159492398"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159492398"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159492398"}]},"ts":"1689159492398"} 2023-07-12 10:58:12,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-12 10:58:12,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,46327,1689159476604 in 171 msec 2023-07-12 10:58:12,402 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 10:58:12,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, ASSIGN in 329 msec 2023-07-12 10:58:12,403 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:12,403 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159492403"}]},"ts":"1689159492403"} 2023-07-12 10:58:12,404 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 10:58:12,407 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:12,408 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 393 msec 2023-07-12 10:58:12,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 10:58:12,621 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-12 10:58:12,621 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 10:58:12,622 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,625 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-12 10:58:12,625 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,625 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 10:58:12,626 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:12,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 10:58:12,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:12,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 10:58:12,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:12,638 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_809721058 2023-07-12 10:58:12,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_809721058 2023-07-12 10:58:12,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:12,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:12,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:12,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:12,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_809721058 2023-07-12 10:58:12,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 56108a624cdfec13e9ff4e567c057307 to RSGroup Group_testMultiTableMove_809721058 2023-07-12 10:58:12,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, REOPEN/MOVE 2023-07-12 10:58:12,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_809721058 2023-07-12 10:58:12,652 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region c26ede90038d3ea0a8e7e04637fbc8c3 to RSGroup Group_testMultiTableMove_809721058 2023-07-12 10:58:12,652 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, REOPEN/MOVE 2023-07-12 10:58:12,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, REOPEN/MOVE 2023-07-12 10:58:12,654 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:12,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_809721058, current retry=0 2023-07-12 10:58:12,655 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, REOPEN/MOVE 2023-07-12 10:58:12,655 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492654"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492654"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492654"}]},"ts":"1689159492654"} 2023-07-12 10:58:12,656 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:12,656 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492655"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492655"}]},"ts":"1689159492655"} 2023-07-12 10:58:12,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:12,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:12,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 56108a624cdfec13e9ff4e567c057307, disabling compactions & flushes 2023-07-12 10:58:12,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. after waiting 0 ms 2023-07-12 10:58:12,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:12,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:12,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 56108a624cdfec13e9ff4e567c057307: 2023-07-12 10:58:12,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 56108a624cdfec13e9ff4e567c057307 move to jenkins-hbase9.apache.org,34631,1689159480880 record at close sequenceid=2 2023-07-12 10:58:12,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:12,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:12,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing c26ede90038d3ea0a8e7e04637fbc8c3, disabling compactions & flushes 2023-07-12 10:58:12,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:12,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:12,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. after waiting 0 ms 2023-07-12 10:58:12,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 
2023-07-12 10:58:12,822 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=CLOSED 2023-07-12 10:58:12,822 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492822"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159492822"}]},"ts":"1689159492822"} 2023-07-12 10:58:12,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:12,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:12,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for c26ede90038d3ea0a8e7e04637fbc8c3: 2023-07-12 10:58:12,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding c26ede90038d3ea0a8e7e04637fbc8c3 move to jenkins-hbase9.apache.org,34631,1689159480880 record at close sequenceid=2 2023-07-12 10:58:12,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-12 10:58:12,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,46327,1689159476604 in 167 msec 2023-07-12 10:58:12,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:12,831 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,34631,1689159480880; forceNewPlan=false, retain=false 2023-07-12 10:58:12,831 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=CLOSED 2023-07-12 10:58:12,831 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492831"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159492831"}]},"ts":"1689159492831"} 2023-07-12 10:58:12,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-12 10:58:12,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,46327,1689159476604 in 173 msec 2023-07-12 10:58:12,836 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, REOPEN/MOVE; 
state=CLOSED, location=jenkins-hbase9.apache.org,34631,1689159480880; forceNewPlan=false, retain=false 2023-07-12 10:58:12,981 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:12,981 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:12,982 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492981"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492981"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492981"}]},"ts":"1689159492981"} 2023-07-12 10:58:12,982 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159492981"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159492981"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159492981"}]},"ts":"1689159492981"} 2023-07-12 10:58:12,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=98, state=RUNNABLE; OpenRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:12,984 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=97, state=RUNNABLE; OpenRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:13,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 
2023-07-12 10:58:13,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c26ede90038d3ea0a8e7e04637fbc8c3, NAME => 'GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:13,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,142 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,143 DEBUG [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/f 2023-07-12 10:58:13,143 DEBUG [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/f 2023-07-12 10:58:13,143 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c26ede90038d3ea0a8e7e04637fbc8c3 columnFamilyName f 2023-07-12 10:58:13,144 INFO [StoreOpener-c26ede90038d3ea0a8e7e04637fbc8c3-1] regionserver.HStore(310): Store=c26ede90038d3ea0a8e7e04637fbc8c3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:13,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened c26ede90038d3ea0a8e7e04637fbc8c3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11574712000, jitterRate=0.07797905802726746}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:13,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for c26ede90038d3ea0a8e7e04637fbc8c3: 2023-07-12 10:58:13,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3., pid=101, masterSystemTime=1689159493135 2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:13,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:13,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 
2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 56108a624cdfec13e9ff4e567c057307, NAME => 'GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:13,153 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159493153"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159493153"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159493153"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159493153"}]},"ts":"1689159493153"} 2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,155 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,156 DEBUG [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/f 2023-07-12 10:58:13,156 DEBUG [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/f 2023-07-12 10:58:13,157 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 56108a624cdfec13e9ff4e567c057307 columnFamilyName f 2023-07-12 10:58:13,157 INFO [StoreOpener-56108a624cdfec13e9ff4e567c057307-1] regionserver.HStore(310): Store=56108a624cdfec13e9ff4e567c057307/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:13,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:13,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=98 2023-07-12 10:58:13,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=98, state=SUCCESS; OpenRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,34631,1689159480880 in 172 msec 2023-07-12 10:58:13,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 56108a624cdfec13e9ff4e567c057307; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9772340000, jitterRate=-0.08987991511821747}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:13,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 56108a624cdfec13e9ff4e567c057307: 2023-07-12 10:58:13,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307., pid=102, masterSystemTime=1689159493135 2023-07-12 10:58:13,165 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, REOPEN/MOVE in 512 msec 2023-07-12 10:58:13,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:13,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 
2023-07-12 10:58:13,167 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:13,167 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159493167"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159493167"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159493167"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159493167"}]},"ts":"1689159493167"} 2023-07-12 10:58:13,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=97 2023-07-12 10:58:13,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=97, state=SUCCESS; OpenRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,34631,1689159480880 in 185 msec 2023-07-12 10:58:13,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, REOPEN/MOVE in 521 msec 2023-07-12 10:58:13,194 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-12 10:58:13,194 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-12 10:58:13,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-12 10:58:13,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_809721058. 
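[Editor's note] The MoveTables flow above (RSGroupAdminEndpoint "move tables [...] to rsgroup", one REOPEN/MOVE TransitRegionStateProcedure per region, then "All regions from table(s) [...] moved to target group") corresponds to a single client call to the rsgroup admin. A rough sketch of that client side is below, assuming the hbase-rsgroup client classes on the classpath; the group name is the one generated for this run, the helper class is hypothetical, and the test itself goes through its own admin wrapper rather than this exact code.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class MoveTablesSketch {
  static void moveTablesToGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>(Arrays.asList(
        TableName.valueOf("GrouptestMultiTableMoveA"),
        TableName.valueOf("GrouptestMultiTableMoveB")));
    // Triggers the REOPEN/MOVE procedures seen above for each region.
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_809721058");
    // Mirrors the GetRSGroupInfoOfTable requests that follow in the log.
    RSGroupInfo info =
        rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    assert "Group_testMultiTableMove_809721058".equals(info.getName());
  }
}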
2023-07-12 10:58:13,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:13,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:13,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:13,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 10:58:13,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:13,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 10:58:13,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:13,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:13,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:13,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testMultiTableMove_809721058 2023-07-12 10:58:13,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:13,667 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 10:58:13,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable GrouptestMultiTableMoveA 2023-07-12 10:58:13,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:13,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:13,682 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159493682"}]},"ts":"1689159493682"} 2023-07-12 10:58:13,683 
INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 10:58:13,685 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 10:58:13,686 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, UNASSIGN}] 2023-07-12 10:58:13,687 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, UNASSIGN 2023-07-12 10:58:13,688 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:13,688 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159493688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159493688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159493688"}]},"ts":"1689159493688"} 2023-07-12 10:58:13,690 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:13,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:13,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing c26ede90038d3ea0a8e7e04637fbc8c3, disabling compactions & flushes 2023-07-12 10:58:13,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:13,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:13,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. after waiting 0 ms 2023-07-12 10:58:13,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 
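[Editor's note] The teardown that starts here (HMaster$11 disable, DisableTableProcedure pid=103, followed below by DeleteTableProcedure pid=106) is the standard disable-then-delete sequence issued from the client. A minimal sketch of the equivalent Admin calls, assuming a hypothetical helper class; pid values in the comments refer to this log only.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableSketch {
  static void dropTable(Admin admin, String name) throws IOException {
    TableName table = TableName.valueOf(name);
    if (admin.tableExists(table)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);   // DisableTableProcedure (pid=103 in this log)
      }
      admin.deleteTable(table);      // DeleteTableProcedure (pid=106 in this log)
    }
  }
}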
2023-07-12 10:58:13,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:13,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3. 2023-07-12 10:58:13,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for c26ede90038d3ea0a8e7e04637fbc8c3: 2023-07-12 10:58:13,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,851 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=c26ede90038d3ea0a8e7e04637fbc8c3, regionState=CLOSED 2023-07-12 10:58:13,851 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159493851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159493851"}]},"ts":"1689159493851"} 2023-07-12 10:58:13,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-12 10:58:13,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure c26ede90038d3ea0a8e7e04637fbc8c3, server=jenkins-hbase9.apache.org,34631,1689159480880 in 163 msec 2023-07-12 10:58:13,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-12 10:58:13,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c26ede90038d3ea0a8e7e04637fbc8c3, UNASSIGN in 170 msec 2023-07-12 10:58:13,857 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159493857"}]},"ts":"1689159493857"} 2023-07-12 10:58:13,859 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 10:58:13,860 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 10:58:13,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 184 msec 2023-07-12 10:58:13,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 10:58:13,984 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-12 10:58:13,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete GrouptestMultiTableMoveA 2023-07-12 10:58:13,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-12 10:58:13,987 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:13,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_809721058' 2023-07-12 10:58:13,988 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:13,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:13,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:13,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:13,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:13,992 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:13,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 10:58:13,994 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits] 2023-07-12 10:58:14,000 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3/recovered.edits/7.seqid 2023-07-12 10:58:14,000 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveA/c26ede90038d3ea0a8e7e04637fbc8c3 2023-07-12 10:58:14,000 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 10:58:14,003 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:14,005 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 10:58:14,006 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 10:58:14,007 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:14,007 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-12 10:58:14,007 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159494007"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:14,009 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:14,009 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c26ede90038d3ea0a8e7e04637fbc8c3, NAME => 'GrouptestMultiTableMoveA,,1689159491400.c26ede90038d3ea0a8e7e04637fbc8c3.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:14,009 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 10:58:14,009 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159494009"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:14,010 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 10:58:14,013 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 10:58:14,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 28 msec 2023-07-12 10:58:14,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 10:58:14,095 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-12 10:58:14,095 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 10:58:14,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable GrouptestMultiTableMoveB 2023-07-12 10:58:14,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:14,099 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159494099"}]},"ts":"1689159494099"} 2023-07-12 10:58:14,100 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 10:58:14,102 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 10:58:14,105 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, UNASSIGN}] 2023-07-12 10:58:14,106 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, UNASSIGN 2023-07-12 10:58:14,107 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:14,107 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159494107"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159494107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159494107"}]},"ts":"1689159494107"} 2023-07-12 10:58:14,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:14,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:14,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:14,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 56108a624cdfec13e9ff4e567c057307, disabling compactions & flushes 2023-07-12 10:58:14,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:14,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:14,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. after waiting 0 ms 2023-07-12 10:58:14,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 2023-07-12 10:58:14,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:14,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307. 
2023-07-12 10:58:14,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 56108a624cdfec13e9ff4e567c057307: 2023-07-12 10:58:14,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:14,503 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=56108a624cdfec13e9ff4e567c057307, regionState=CLOSED 2023-07-12 10:58:14,503 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689159494503"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159494503"}]},"ts":"1689159494503"} 2023-07-12 10:58:14,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-12 10:58:14,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 56108a624cdfec13e9ff4e567c057307, server=jenkins-hbase9.apache.org,34631,1689159480880 in 183 msec 2023-07-12 10:58:14,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-12 10:58:14,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=56108a624cdfec13e9ff4e567c057307, UNASSIGN in 406 msec 2023-07-12 10:58:14,515 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159494515"}]},"ts":"1689159494515"} 2023-07-12 10:58:14,518 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 10:58:14,520 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 10:58:14,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:14,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 427 msec 2023-07-12 10:58:14,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 10:58:14,825 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-12 10:58:14,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete GrouptestMultiTableMoveB 2023-07-12 10:58:14,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,828 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_809721058' 2023-07-12 10:58:14,829 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:14,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:14,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:14,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:14,835 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:14,837 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits] 2023-07-12 10:58:14,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 10:58:14,843 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits/7.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307/recovered.edits/7.seqid 2023-07-12 10:58:14,844 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/GrouptestMultiTableMoveB/56108a624cdfec13e9ff4e567c057307 2023-07-12 10:58:14,844 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 10:58:14,846 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,848 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 10:58:14,850 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 
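As with table A earlier, DeleteTableProcedure does not destroy region data outright: HFileArchiver moves the family and recovered.edits files out of the .tmp layout into the cluster's archive directory before deleting the source, which is what the ARCHIVING/Archived/Deleted sequence above records. A small sketch that lists what was left under the archive path for the deleted table, using the plain Hadoop FileSystem API; the NameNode URI and directory layout are copied from this log and are specific to this minicluster run.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedRegionFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // NameNode address and archive layout taken from the log lines above; adjust for your cluster.
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:36583"), conf);
    Path archived = new Path(
        "/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/GrouptestMultiTableMoveB");
    // Immediate children are the archived region directories
    // (for example 56108a624cdfec13e9ff4e567c057307 from the entries above).
    for (FileStatus status : fs.listStatus(archived)) {
      System.out.println(status.getPath());
    }
  }
}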
2023-07-12 10:58:14,851 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,851 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-12 10:58:14,851 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159494851"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:14,853 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:14,853 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 56108a624cdfec13e9ff4e567c057307, NAME => 'GrouptestMultiTableMoveB,,1689159492013.56108a624cdfec13e9ff4e567c057307.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:14,853 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 10:58:14,853 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159494853"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:14,854 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 10:58:14,856 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 10:58:14,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 31 msec 2023-07-12 10:58:14,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 10:58:14,940 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-12 10:58:14,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:14,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:14,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:14,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
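On the client side, the DELETE operation logged above is a single Admin call whose returned future polls the master ("Checking to see if procedure is done pid=110") until the DeleteTableProcedure reports SUCCESS. A short sketch of that pattern using the asynchronous variant of the API; the table name is taken from the log, and the snippet assumes the table has already been disabled, as it has at this point in the run.

import java.util.concurrent.Future;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("GrouptestMultiTableMoveB"); // name taken from the log
      // deleteTableAsync submits the DeleteTableProcedure; waiting on the future polls the master
      // until the procedure reports SUCCESS, matching the "procId: 110 completed" entry above.
      Future<Void> delete = admin.deleteTableAsync(table);
      delete.get();
      System.out.println("still exists? " + admin.tableExists(table));
    }
  }
}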
2023-07-12 10:58:14,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:14,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:14,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_809721058 2023-07-12 10:58:14,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:14,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:14,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:14,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_809721058, current retry=0 2023-07-12 10:58:14,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880] are moved back to Group_testMultiTableMove_809721058 2023-07-12 10:58:14,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_809721058 => default 2023-07-12 10:58:14,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:14,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testMultiTableMove_809721058 2023-07-12 10:58:14,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:14,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:14,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:14,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:14,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:14,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
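With both tables gone, the teardown in TestRSGroupsBase restores the rsgroup layout: moving tables back to default is a no-op here ("moveTables() passed an empty set. Ignoring."), the group's server is moved back to default, and Group_testMultiTableMove_809721058 is removed. A rough sketch of those calls using the RSGroupAdminClient that appears in the stack traces further down this log; the constructor and method signatures are assumed from the branch-2.4 hbase-rsgroup module, so verify them against your version.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RestoreRSGroupExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      String group = "Group_testMultiTableMove_809721058";  // group name taken from the log
      // Move the group's only server back to the default group, then drop the now-empty group,
      // mirroring the MoveServers and RemoveRSGroup requests logged above.
      groups.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 34631)), "default");
      groups.removeRSGroup(group);
    }
  }
}

When the rsgroup coprocessor is loaded, the HBase shell exposes the same operations (for example move_servers_rsgroup and remove_rsgroup), which is often more convenient than the Java client for this kind of cleanup.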
2023-07-12 10:58:14,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:14,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:14,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:14,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:14,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:14,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:14,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:14,983 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:14,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:14,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:14,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:14,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:14,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:15,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160695006, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:15,007 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:15,009 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,010 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,046 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=499 (was 505), OpenFileDescriptor=777 (was 786), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=322 (was 341), ProcessCount=172 (was 172), AvailableMemoryMB=6020 (was 6176) 2023-07-12 10:58:15,066 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=499, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=322, ProcessCount=172, AvailableMemoryMB=6020 2023-07-12 10:58:15,066 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-12 10:58:15,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:15,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:15,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:15,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:15,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,083 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:15,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:15,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to 
rsgroup master 2023-07-12 10:58:15,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160695098, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:15,099 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:15,101 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,102 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup oldGroup 2023-07-12 10:58:15,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup oldGroup 2023-07-12 10:58:15,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:15,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to default 2023-07-12 10:58:15,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 10:58:15,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 10:58:15,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 10:58:15,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,160 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup anotherRSGroup 2023-07-12 10:58:15,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 10:58:15,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:15,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937] to rsgroup anotherRSGroup 2023-07-12 10:58:15,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 10:58:15,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:15,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:15,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39937,1689159477046] are moved back to default 2023-07-12 10:58:15,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 10:58:15,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,200 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 10:58:15,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 10:58:15,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 10:58:15,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.2.10:34382 deadline: 1689160695211, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 10:58:15,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 10:58:15,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: 
anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.2.10:34382 deadline: 1689160695214, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 10:58:15,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from default to newRSGroup2 2023-07-12 10:58:15,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.2.10:34382 deadline: 1689160695216, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 10:58:15,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldGroup to default 2023-07-12 10:58:15,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.2.10:34382 deadline: 1689160695217, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 10:58:15,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:15,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937] to rsgroup default 2023-07-12 10:58:15,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 10:58:15,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:15,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 10:58:15,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39937,1689159477046] are moved back to anotherRSGroup 2023-07-12 10:58:15,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 10:58:15,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup anotherRSGroup 2023-07-12 10:58:15,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 10:58:15,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty 
set. Ignoring. 2023-07-12 10:58:15,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:15,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 10:58:15,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 10:58:15,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to oldGroup 2023-07-12 10:58:15,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 10:58:15,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup oldGroup 2023-07-12 10:58:15,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:15,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:15,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:15,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:15,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:15,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,276 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:15,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:15,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:15,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160695293, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:15,294 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:15,296 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,297 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,320 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 499) Potentially hanging thread: hconnection-0x2be10837-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=777 (was 777), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=322 (was 322), ProcessCount=173 (was 172) - ProcessCount LEAK? -, AvailableMemoryMB=6038 (was 6020) - AvailableMemoryMB LEAK? - 2023-07-12 10:58:15,320 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 10:58:15,343 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=503, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=322, ProcessCount=172, AvailableMemoryMB=6039 2023-07-12 10:58:15,343 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 10:58:15,347 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 10:58:15,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:15,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:15,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:15,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:15,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:15,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:15,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:15,366 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:15,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:15,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:15,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:15,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:15,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160695390, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:15,392 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:15,394 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:15,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,396 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:15,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:15,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:15,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup oldgroup 2023-07-12 10:58:15,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:15,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:15,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup oldgroup 2023-07-12 10:58:15,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:15,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:15,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to default 2023-07-12 10:58:15,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 10:58:15,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:15,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:15,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:15,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 10:58:15,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 
10:58:15,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:15,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 10:58:15,430 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:15,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-12 10:58:15,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:15,432 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:15,433 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:15,433 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:15,433 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:15,437 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:15,438 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:15,439 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 empty. 
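The entries above show the test adding the rsgroup "oldgroup" and moving the region servers jenkins-hbase9.apache.org:37587 and :34631 into it, while the earlier ConstraintException is what the same MoveServers RPC returns when a named server is offline or not known to the group manager. A minimal sketch of driving those calls from the Java client, assuming the branch-2 RSGroupAdminClient that appears in the stack trace; the host/port values are placeholders, not the servers from this run:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Create the target group, then move two region servers into it.
          rsGroupAdmin.addRSGroup("oldgroup");
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("rs1.example.org", 16020));
          servers.add(Address.fromParts("rs2.example.org", 16020));
          try {
            rsGroupAdmin.moveServers(servers, "oldgroup");
          } catch (java.io.IOException e) {
            // The master rejects the move with a ConstraintException (as in the
            // stack trace above) when a listed server is offline or unknown to
            // the RSGroup manager.
            System.err.println("moveServers rejected: " + e.getMessage());
          }
        }
      }
    }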
2023-07-12 10:58:15,439 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:15,439 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 10:58:15,480 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:15,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:15,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:15,863 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:15,864 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 80a841fbae3b853b4dd566e3c56f7cf0, NAME => 'testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:15,876 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:15,877 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 80a841fbae3b853b4dd566e3c56f7cf0, disabling compactions & flushes 2023-07-12 10:58:15,877 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:15,877 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:15,877 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. after waiting 0 ms 2023-07-12 10:58:15,877 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:15,877 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
2023-07-12 10:58:15,877 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:15,879 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:15,880 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159495880"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159495880"}]},"ts":"1689159495880"} 2023-07-12 10:58:15,881 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:15,882 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:15,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159495882"}]},"ts":"1689159495882"} 2023-07-12 10:58:15,883 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 10:58:15,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:15,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:15,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:15,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:15,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, ASSIGN}] 2023-07-12 10:58:15,889 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, ASSIGN 2023-07-12 10:58:15,890 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:16,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:16,040 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 10:58:16,042 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:16,042 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159496042"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159496042"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159496042"}]},"ts":"1689159496042"} 2023-07-12 10:58:16,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:16,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80a841fbae3b853b4dd566e3c56f7cf0, NAME => 'testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:16,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:16,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,202 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,203 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:16,203 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:16,204 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80a841fbae3b853b4dd566e3c56f7cf0 columnFamilyName tr 2023-07-12 10:58:16,204 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(310): Store=80a841fbae3b853b4dd566e3c56f7cf0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:16,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:16,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 80a841fbae3b853b4dd566e3c56f7cf0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11138736640, jitterRate=0.037375688552856445}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:16,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:16,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0., pid=113, masterSystemTime=1689159496195 2023-07-12 10:58:16,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
2023-07-12 10:58:16,213 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:16,214 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159496213"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159496213"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159496213"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159496213"}]},"ts":"1689159496213"} 2023-07-12 10:58:16,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-12 10:58:16,217 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046 in 171 msec 2023-07-12 10:58:16,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-12 10:58:16,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, ASSIGN in 329 msec 2023-07-12 10:58:16,219 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:16,219 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159496219"}]},"ts":"1689159496219"} 2023-07-12 10:58:16,220 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 10:58:16,222 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:16,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 795 msec 2023-07-12 10:58:16,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 10:58:16,536 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-12 10:58:16,536 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 10:58:16,536 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:16,540 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-12 10:58:16,540 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:16,540 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
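Procedure 111 above is the master-side CreateTableProcedure for 'testRename' with a single column family 'tr'; the client polls "Checking to see if procedure is done pid=111" until the procedure finishes and the region is assigned. A rough equivalent of that request through the HBase 2.x client API is sketched below; the table and family names come from the log, everything else is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName name = TableName.valueOf("testRename");
          // One column family 'tr' with default attributes, matching the
          // descriptor printed by HMaster in the log above.
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build();
          admin.createTable(desc); // runs as a CreateTableProcedure on the master
          System.out.println("testRename available: " + admin.isTableAvailable(name));
        }
      }
    }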
2023-07-12 10:58:16,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [testRename] to rsgroup oldgroup 2023-07-12 10:58:16,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:16,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:16,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:16,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:16,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 10:58:16,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 80a841fbae3b853b4dd566e3c56f7cf0 to RSGroup oldgroup 2023-07-12 10:58:16,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:16,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:16,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:16,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:16,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:16,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE 2023-07-12 10:58:16,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 10:58:16,551 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE 2023-07-12 10:58:16,551 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:16,552 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159496551"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159496551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159496551"}]},"ts":"1689159496551"} 2023-07-12 10:58:16,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, 
state=RUNNABLE; CloseRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:16,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 80a841fbae3b853b4dd566e3c56f7cf0, disabling compactions & flushes 2023-07-12 10:58:16,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. after waiting 0 ms 2023-07-12 10:58:16,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:16,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:16,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:16,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 80a841fbae3b853b4dd566e3c56f7cf0 move to jenkins-hbase9.apache.org,34631,1689159480880 record at close sequenceid=2 2023-07-12 10:58:16,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:16,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=CLOSED 2023-07-12 10:58:16,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159496715"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159496715"}]},"ts":"1689159496715"} 2023-07-12 10:58:16,718 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 10:58:16,718 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046 in 164 msec 2023-07-12 10:58:16,720 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,34631,1689159480880; 
forceNewPlan=false, retain=false 2023-07-12 10:58:16,871 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:16,871 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:16,871 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159496871"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159496871"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159496871"}]},"ts":"1689159496871"} 2023-07-12 10:58:16,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:17,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:17,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80a841fbae3b853b4dd566e3c56f7cf0, NAME => 'testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:17,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:17,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,040 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,041 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:17,041 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:17,042 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80a841fbae3b853b4dd566e3c56f7cf0 columnFamilyName tr 2023-07-12 10:58:17,042 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(310): Store=80a841fbae3b853b4dd566e3c56f7cf0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:17,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:17,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 80a841fbae3b853b4dd566e3c56f7cf0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10034191840, jitterRate=-0.06549306213855743}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:17,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:17,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0., pid=116, masterSystemTime=1689159497033 2023-07-12 10:58:17,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:17,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
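The close on jenkins-hbase9.apache.org:39937 and reopen on :34631 above are the two halves of the REOPEN/MOVE transition (pid=114) started when the test moved table testRename into 'oldgroup', whose servers were populated earlier. A sketch of the corresponding client call, under the same assumptions as the previous snippet (connection setup and null handling are illustrative):

    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName table = TableName.valueOf("testRename");

          // Moves the table's regions onto servers of 'oldgroup'; on the master
          // this shows up as the TransitRegionStateProcedure REOPEN/MOVE above.
          rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");

          // The GetRSGroupInfoOfTable requests in the log correspond to this lookup.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println(table + " -> " + (info == null ? "(no group)" : info.getName()));
        }
      }
    }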
2023-07-12 10:58:17,052 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:17,052 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159497052"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159497052"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159497052"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159497052"}]},"ts":"1689159497052"} 2023-07-12 10:58:17,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-12 10:58:17,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,34631,1689159480880 in 178 msec 2023-07-12 10:58:17,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE in 505 msec 2023-07-12 10:58:17,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-12 10:58:17,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-12 10:58:17,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:17,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:17,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:17,558 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:17,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-12 10:58:17,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:17,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 10:58:17,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:17,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): 
Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-12 10:58:17,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:17,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:17,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:17,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup normal 2023-07-12 10:58:17,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:17,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:17,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:17,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:17,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:17,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:17,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:17,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:17,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937] to rsgroup normal 2023-07-12 10:58:17,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:17,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:17,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:17,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:17,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:17,583 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:17,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39937,1689159477046] are moved back to default 2023-07-12 10:58:17,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 10:58:17,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:17,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:17,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:17,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=normal 2023-07-12 10:58:17,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:17,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:17,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 10:58:17,593 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:17,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-12 10:58:17,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 10:58:17,595 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:17,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:17,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:17,596 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:17,597 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 8 2023-07-12 10:58:17,603 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:17,605 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,606 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 empty. 2023-07-12 10:58:17,606 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,606 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 10:58:17,622 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:17,624 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 333f3e2f0b856b448ae4dcb22c19e113, NAME => 'unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 333f3e2f0b856b448ae4dcb22c19e113, disabling compactions & flushes 2023-07-12 10:58:17,635 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. after waiting 0 ms 2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:17,635 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
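After the second group 'normal' is added and jenkins-hbase9.apache.org:39937 is moved into it, the test repeatedly issues "list rsgroup" requests to check membership; each ListRSGroupInfos call in the log is the server side of a listRSGroups() lookup. A small sketch of that verification step, with the same hedged assumptions about the client as above:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Prints each group with its member servers and tables, the same
          // information the log dumps as "Name:default, Servers:[...], Tables:[...]".
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName()
                + " servers=" + group.getServers()
                + " tables=" + group.getTables());
          }
        }
      }
    }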
2023-07-12 10:58:17,635 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:17,638 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:17,639 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159497638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159497638"}]},"ts":"1689159497638"} 2023-07-12 10:58:17,640 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:17,641 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:17,641 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497641"}]},"ts":"1689159497641"} 2023-07-12 10:58:17,642 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 10:58:17,646 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, ASSIGN}] 2023-07-12 10:58:17,648 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, ASSIGN 2023-07-12 10:58:17,649 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:17,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 10:58:17,800 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:17,801 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159497800"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159497800"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159497800"}]},"ts":"1689159497800"} 2023-07-12 10:58:17,802 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:17,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=117 2023-07-12 10:58:17,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:17,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 333f3e2f0b856b448ae4dcb22c19e113, NAME => 'unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:17,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:17,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,959 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,961 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:17,961 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:17,962 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 333f3e2f0b856b448ae4dcb22c19e113 columnFamilyName ut 2023-07-12 10:58:17,962 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(310): Store=333f3e2f0b856b448ae4dcb22c19e113/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:17,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:17,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:17,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 333f3e2f0b856b448ae4dcb22c19e113; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9412108960, jitterRate=-0.12342904508113861}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:17,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:17,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113., pid=119, masterSystemTime=1689159497954 2023-07-12 10:58:17,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:17,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:17,973 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:17,974 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159497973"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159497973"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159497973"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159497973"}]},"ts":"1689159497973"} 2023-07-12 10:58:17,977 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 10:58:17,977 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604 in 173 msec 2023-07-12 10:58:17,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 10:58:17,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, ASSIGN in 331 msec 2023-07-12 10:58:17,979 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:17,980 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159497979"}]},"ts":"1689159497979"} 2023-07-12 10:58:17,981 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 10:58:17,984 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:17,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 394 msec 2023-07-12 10:58:18,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 10:58:18,198 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-12 10:58:18,198 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 10:58:18,198 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,203 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-12 10:58:18,203 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:18,203 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
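[annotation] The entries above trace CreateTableProcedure pid=117 for 'unmovedTable' (single column family 'ut') through ADD_TO_META, ASSIGN and POST_OPERATION, after which the listener waits until every region is assigned. A minimal sketch of how a test typically drives this through HBaseTestingUtility; the class and method names below are illustrative and not taken from the test source, and the mini cluster is assumed to be already running as in the setup phase of this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateUnmovedTableSketch {
      // Assumes the mini cluster from the setup phase is already running.
      static void createAndWait(HBaseTestingUtility util) throws Exception {
        TableName unmoved = TableName.valueOf("unmovedTable");
        util.createTable(unmoved, Bytes.toBytes("ut"));    // drives the CreateTableProcedure (pid=117) seen above
        util.waitUntilAllRegionsAssigned(unmoved, 60000);  // "Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms"
      }
    }
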
2023-07-12 10:58:18,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [unmovedTable] to rsgroup normal 2023-07-12 10:58:18,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 10:58:18,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:18,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:18,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:18,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:18,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 10:58:18,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 333f3e2f0b856b448ae4dcb22c19e113 to RSGroup normal 2023-07-12 10:58:18,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE 2023-07-12 10:58:18,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 10:58:18,217 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE 2023-07-12 10:58:18,219 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:18,219 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159498218"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159498218"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159498218"}]},"ts":"1689159498218"} 2023-07-12 10:58:18,220 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:18,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 333f3e2f0b856b448ae4dcb22c19e113, disabling compactions & flushes 2023-07-12 10:58:18,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. after waiting 0 ms 2023-07-12 10:58:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:18,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:18,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:18,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:18,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 333f3e2f0b856b448ae4dcb22c19e113 move to jenkins-hbase9.apache.org,39937,1689159477046 record at close sequenceid=2 2023-07-12 10:58:18,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,392 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=CLOSED 2023-07-12 10:58:18,392 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159498392"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159498392"}]},"ts":"1689159498392"} 2023-07-12 10:58:18,396 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 10:58:18,396 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604 in 174 msec 2023-07-12 10:58:18,397 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:18,548 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:18,548 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159498548"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159498548"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159498548"}]},"ts":"1689159498548"} 2023-07-12 10:58:18,550 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:18,705 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:18,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 333f3e2f0b856b448ae4dcb22c19e113, NAME => 'unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:18,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:18,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,707 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,708 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:18,708 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:18,709 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
333f3e2f0b856b448ae4dcb22c19e113 columnFamilyName ut 2023-07-12 10:58:18,709 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(310): Store=333f3e2f0b856b448ae4dcb22c19e113/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:18,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:18,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 333f3e2f0b856b448ae4dcb22c19e113; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11275499520, jitterRate=0.05011272430419922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:18,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:18,716 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113., pid=122, masterSystemTime=1689159498701 2023-07-12 10:58:18,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:18,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:18,717 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:18,717 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159498717"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159498717"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159498717"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159498717"}]},"ts":"1689159498717"} 2023-07-12 10:58:18,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 10:58:18,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,39937,1689159477046 in 169 msec 2023-07-12 10:58:18,721 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE in 504 msec 2023-07-12 10:58:19,195 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-12 10:58:19,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 10:58:19,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
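[annotation] The block ending here (pid=120 with subprocedures 121/122) shows a MoveTables request re-homing unmovedTable into rsgroup 'normal' via a REOPEN/MOVE: close on jenkins-hbase9.apache.org,46327 and reopen on jenkins-hbase9.apache.org,39937, followed by the client re-reading group info. A hedged sketch of the branch-2 client-side calls that produce this pattern, using RSGroupAdminClient from the hbase-rsgroup module; the surrounding class and connection handling are assumptions for illustration.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      static void moveToNormal(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        TableName unmoved = TableName.valueOf("unmovedTable");
        groups.moveTables(Collections.singleton(unmoved), "normal");  // logged as RSGroupAdminService.MoveTables
        RSGroupInfo info = groups.getRSGroupInfoOfTable(unmoved);     // logged as GetRSGroupInfoOfTable
        if (!"normal".equals(info.getName())) {
          throw new IllegalStateException("unexpected group: " + info.getName());
        }
      }
    }
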
2023-07-12 10:58:19,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:19,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,224 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:19,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 10:58:19,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:19,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=normal 2023-07-12 10:58:19,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:19,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 10:58:19,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:19,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldgroup to newgroup 2023-07-12 10:58:19,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:19,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:19,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 10:58:19,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 10:58:19,236 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=newgroup 2023-07-12 10:58:19,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:19,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-12 10:58:19,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:19,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 10:58:19,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:19,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:19,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:19,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [unmovedTable] to rsgroup default 2023-07-12 10:58:19,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:19,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:19,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:19,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:19,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:19,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 10:58:19,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupAdminServer(345): Moving region 333f3e2f0b856b448ae4dcb22c19e113 to RSGroup default 2023-07-12 10:58:19,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE 2023-07-12 10:58:19,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 10:58:19,251 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE 2023-07-12 10:58:19,251 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:19,252 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159499251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159499251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159499251"}]},"ts":"1689159499251"} 2023-07-12 10:58:19,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:19,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 333f3e2f0b856b448ae4dcb22c19e113, disabling compactions & flushes 2023-07-12 10:58:19,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:19,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:19,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. after waiting 0 ms 2023-07-12 10:58:19,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:19,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:19,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:19,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:19,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 333f3e2f0b856b448ae4dcb22c19e113 move to jenkins-hbase9.apache.org,46327,1689159476604 record at close sequenceid=5 2023-07-12 10:58:19,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,418 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=CLOSED 2023-07-12 10:58:19,418 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159499418"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159499418"}]},"ts":"1689159499418"} 2023-07-12 10:58:19,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 10:58:19,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,39937,1689159477046 in 166 msec 2023-07-12 10:58:19,421 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:19,572 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:19,572 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159499572"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159499572"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159499572"}]},"ts":"1689159499572"} 2023-07-12 10:58:19,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:19,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:19,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 333f3e2f0b856b448ae4dcb22c19e113, NAME => 'unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:19,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:19,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,731 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,732 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:19,732 DEBUG [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/ut 2023-07-12 10:58:19,732 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 333f3e2f0b856b448ae4dcb22c19e113 columnFamilyName ut 2023-07-12 10:58:19,733 INFO [StoreOpener-333f3e2f0b856b448ae4dcb22c19e113-1] regionserver.HStore(310): Store=333f3e2f0b856b448ae4dcb22c19e113/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:19,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:19,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 333f3e2f0b856b448ae4dcb22c19e113; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11758627680, jitterRate=0.0951075404882431}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:19,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:19,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113., pid=125, masterSystemTime=1689159499725 2023-07-12 10:58:19,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:19,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:19,741 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=333f3e2f0b856b448ae4dcb22c19e113, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:19,741 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689159499741"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159499741"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159499741"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159499741"}]},"ts":"1689159499741"} 2023-07-12 10:58:19,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-12 10:58:19,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 333f3e2f0b856b448ae4dcb22c19e113, server=jenkins-hbase9.apache.org,46327,1689159476604 in 168 msec 2023-07-12 10:58:19,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=333f3e2f0b856b448ae4dcb22c19e113, REOPEN/MOVE in 494 msec 2023-07-12 10:58:20,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-12 10:58:20,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
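[annotation] The entries around 10:58:19 show the group rename ("rename rsgroup from oldgroup to newgroup", ZK GroupInfo count bumped to 9) followed by unmovedTable being moved back to the default group through another REOPEN/MOVE (pid=123 with close/open subprocedures 124/125). A sketch of the corresponding client calls; renameRSGroup is assumed to be the client-side counterpart of the RenameRSGroup RPC logged above, so treat the exact method as an assumption rather than confirmed test code.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameAndRestoreSketch {
      static void renameAndRestore(RSGroupAdminClient groups) throws Exception {
        groups.renameRSGroup("oldgroup", "newgroup");  // "rename rsgroup from oldgroup to newgroup" / RenameRSGroup
        // Send unmovedTable back to the default group; this is the REOPEN/MOVE pid=123 above.
        groups.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "default");
      }
    }
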
2023-07-12 10:58:20,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:20,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:39937] to rsgroup default 2023-07-12 10:58:20,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 10:58:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:20,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 10:58:20,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,39937,1689159477046] are moved back to normal 2023-07-12 10:58:20,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 10:58:20,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:20,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup normal 2023-07-12 10:58:20,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:20,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:20,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:20,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 10:58:20,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:20,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:20,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
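[annotation] The cleanup just above moves server jenkins-hbase9.apache.org:39937 out of 'normal' back to 'default' ("Move servers done: normal => default") and then removes the now-empty group; a MoveTables call with an empty set is simply ignored. A sketch of that teardown order, with the host/port literal taken from the log but the wrapper class purely illustrative: a group can only be removed once it holds neither servers nor tables.

    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupTeardownSketch {
      static void dropNormalGroup(RSGroupAdminClient groups) throws Exception {
        // Empty the group first, then remove it.
        groups.moveServers(Collections.singleton(
            Address.fromParts("jenkins-hbase9.apache.org", 39937)), "default");  // "Move servers done: normal => default"
        groups.removeRSGroup("normal");  // "remove rsgroup normal" / RemoveRSGroup
      }
    }
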
2023-07-12 10:58:20,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:20,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:20,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:20,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:20,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:20,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:20,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:20,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:20,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [testRename] to rsgroup default 2023-07-12 10:58:20,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:20,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:20,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:20,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 10:58:20,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(345): Moving region 80a841fbae3b853b4dd566e3c56f7cf0 to RSGroup default 2023-07-12 10:58:20,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE 2023-07-12 10:58:20,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 10:58:20,283 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE 2023-07-12 10:58:20,284 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=CLOSING, 
regionLocation=jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:20,284 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159500284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159500284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159500284"}]},"ts":"1689159500284"} 2023-07-12 10:58:20,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,34631,1689159480880}] 2023-07-12 10:58:20,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 80a841fbae3b853b4dd566e3c56f7cf0, disabling compactions & flushes 2023-07-12 10:58:20,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:20,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:20,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. after waiting 0 ms 2023-07-12 10:58:20,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:20,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 10:58:20,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
2023-07-12 10:58:20,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:20,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 80a841fbae3b853b4dd566e3c56f7cf0 move to jenkins-hbase9.apache.org,39937,1689159477046 record at close sequenceid=5 2023-07-12 10:58:20,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,448 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=CLOSED 2023-07-12 10:58:20,448 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159500448"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159500448"}]},"ts":"1689159500448"} 2023-07-12 10:58:20,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 10:58:20,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,34631,1689159480880 in 165 msec 2023-07-12 10:58:20,452 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:20,603 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:20,603 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:20,603 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159500603"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159500603"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159500603"}]},"ts":"1689159500603"} 2023-07-12 10:58:20,605 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:20,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
2023-07-12 10:58:20,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80a841fbae3b853b4dd566e3c56f7cf0, NAME => 'testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,763 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,764 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:20,764 DEBUG [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/tr 2023-07-12 10:58:20,765 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80a841fbae3b853b4dd566e3c56f7cf0 columnFamilyName tr 2023-07-12 10:58:20,766 INFO [StoreOpener-80a841fbae3b853b4dd566e3c56f7cf0-1] regionserver.HStore(310): Store=80a841fbae3b853b4dd566e3c56f7cf0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:20,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:20,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 80a841fbae3b853b4dd566e3c56f7cf0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11988419840, jitterRate=0.1165086030960083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:20,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:20,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0., pid=128, masterSystemTime=1689159500757 2023-07-12 10:58:20,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:20,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:20,776 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=80a841fbae3b853b4dd566e3c56f7cf0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:20,776 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689159500776"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159500776"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159500776"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159500776"}]},"ts":"1689159500776"} 2023-07-12 10:58:20,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 10:58:20,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 80a841fbae3b853b4dd566e3c56f7cf0, server=jenkins-hbase9.apache.org,39937,1689159477046 in 172 msec 2023-07-12 10:58:20,783 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=80a841fbae3b853b4dd566e3c56f7cf0, REOPEN/MOVE in 499 msec 2023-07-12 10:58:21,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 10:58:21,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-12 10:58:21,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:21,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 10:58:21,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 10:58:21,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to newgroup 2023-07-12 10:58:21,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 10:58:21,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup newgroup 2023-07-12 10:58:21,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,297 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) 
master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:21,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160701312, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:21,313 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:21,314 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,316 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,338 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=498 (was 503), OpenFileDescriptor=754 (was 777), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 322), ProcessCount=170 (was 172), AvailableMemoryMB=8125 (was 6039) - AvailableMemoryMB LEAK? - 2023-07-12 10:58:21,354 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=498, OpenFileDescriptor=754, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8124 2023-07-12 10:58:21,354 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-12 10:58:21,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:21,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
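[Editor's note] The ConstraintException trace above comes from the test's cleanup path: TestRSGroupsBase.tearDownAfterMethod asks the master to move every server address, including the master's own address jenkins-hbase9.apache.org:38969, back into a group, and RSGroupAdminServer rejects the master's address with "Server ... is either offline or it does not exist"; the test only logs this as "Got this on setup, FYI" and continues. A minimal sketch of that tolerate-the-exception pattern is below, with hypothetical host/port/group values; it assumes the RSGroupAdminClient.moveServers signature shown in the stack trace and is not the actual test code.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      // Mirrors the teardown behaviour seen in the log: attempt the move, but tolerate the
      // ConstraintException raised when the address belongs to the master rather than a region server.
      static void tryMoveServer(Connection conn, String host, int port, String group) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), group);
        } catch (ConstraintException e) {
          // Expected when host:port is the master's address, e.g. jenkins-hbase9.apache.org:38969 above.
          System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
      }
    }

On the wire this is the "RSGroupAdminService.MoveServers" request in the log; the server-side ConstraintException is unwrapped client-side by ProtobufUtil.handleRemoteException, which is why the trace shows RemoteWithExtrasException as the cause.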
2023-07-12 10:58:21,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:21,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:21,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,367 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:21,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160701377, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:21,378 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:21,380 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,381 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 10:58:21,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:21,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 10:58:21,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 10:58:21,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bogus 2023-07-12 10:58:21,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bogus 2023-07-12 10:58:21,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.2.10:34382 deadline: 1689160701390, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 10:58:21,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [bogus:123] to rsgroup bogus 2023-07-12 10:58:21,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 806 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.2.10:34382 deadline: 1689160701392, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 10:58:21,394 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:21,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=true 2023-07-12 10:58:21,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.2.10 balance rsgroup, group=bogus 2023-07-12 10:58:21,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.2.10:34382 deadline: 1689160701400, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 10:58:21,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:21,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:21,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:21,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:21,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,413 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:21,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160701423, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:21,426 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:21,428 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,430 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,450 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=502 (was 498) Potentially hanging thread: hconnection-0x5310854c-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5310854c-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=753 (was 754), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8124 (was 8124) 2023-07-12 10:58:21,450 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 10:58:21,465 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502, OpenFileDescriptor=753, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8123 2023-07-12 10:58:21,465 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 10:58:21,465 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 10:58:21,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:21,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:21,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:21,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:21,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:21,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:21,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:21,477 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:21,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:21,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:21,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:21,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:21,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160701488, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:21,489 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:21,490 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:21,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,491 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:21,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:21,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,501 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:21,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:21,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:21,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 10:58:21,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to default 2023-07-12 10:58:21,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:21,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:21,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:21,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,523 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:21,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:21,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:21,527 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:21,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-12 10:58:21,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 10:58:21,529 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:21,529 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:21,530 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:21,530 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:21,532 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:21,535 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,535 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:21,535 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,535 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,535 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d empty. 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 empty. 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 empty. 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 empty. 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e empty. 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,536 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,537 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,537 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:21,537 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 10:58:21,552 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:21,554 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => b2fa4476a300de230748314eea6a8d6e, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:21,554 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 026e0ec89cb1a85fdb48556cf732e729, NAME => 'Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:21,554 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8fb9eebd295be7eeef305491aada96f1, NAME => 'Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8fb9eebd295be7eeef305491aada96f1, disabling compactions & flushes 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 026e0ec89cb1a85fdb48556cf732e729, disabling compactions & flushes 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,590 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 
2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:21,590 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. after waiting 0 ms 2023-07-12 10:58:21,590 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing b2fa4476a300de230748314eea6a8d6e, disabling compactions & flushes 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. after waiting 0 ms 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,591 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:21,591 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 026e0ec89cb1a85fdb48556cf732e729: 2023-07-12 10:58:21,591 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:21,591 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8fb9eebd295be7eeef305491aada96f1: 2023-07-12 10:58:21,592 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 
after waiting 0 ms 2023-07-12 10:58:21,592 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => f028546910f9506853187e2757ec2d1d, NAME => 'Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:21,592 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3f5a3cf79edc90932fa731afa5980076, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp 2023-07-12 10:58:21,592 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:21,592 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:21,592 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for b2fa4476a300de230748314eea6a8d6e: 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing f028546910f9506853187e2757ec2d1d, disabling compactions & flushes 2023-07-12 10:58:21,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 
after waiting 0 ms 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for f028546910f9506853187e2757ec2d1d: 2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 3f5a3cf79edc90932fa731afa5980076, disabling compactions & flushes 2023-07-12 10:58:21,627 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. after waiting 0 ms 2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:21,627 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 
2023-07-12 10:58:21,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 3f5a3cf79edc90932fa731afa5980076: 2023-07-12 10:58:21,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 10:58:21,630 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:21,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501630"}]},"ts":"1689159501630"} 2023-07-12 10:58:21,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501630"}]},"ts":"1689159501630"} 2023-07-12 10:58:21,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501630"}]},"ts":"1689159501630"} 2023-07-12 10:58:21,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501630"}]},"ts":"1689159501630"} 2023-07-12 10:58:21,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159501630"}]},"ts":"1689159501630"} 2023-07-12 10:58:21,633 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 10:58:21,634 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:21,634 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159501634"}]},"ts":"1689159501634"} 2023-07-12 10:58:21,635 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 10:58:21,638 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:21,638 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:21,638 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:21,638 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:21,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, ASSIGN}] 2023-07-12 10:58:21,641 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, ASSIGN 2023-07-12 10:58:21,641 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, ASSIGN 2023-07-12 10:58:21,641 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, ASSIGN 2023-07-12 10:58:21,641 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, ASSIGN 2023-07-12 10:58:21,641 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:21,641 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:21,642 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, ASSIGN 2023-07-12 10:58:21,642 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:21,642 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,46327,1689159476604; forceNewPlan=false, retain=false 2023-07-12 10:58:21,642 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,39937,1689159477046; forceNewPlan=false, retain=false 2023-07-12 10:58:21,703 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 10:58:21,792 INFO [jenkins-hbase9:38969] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 10:58:21,796 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f028546910f9506853187e2757ec2d1d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:21,796 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=8fb9eebd295be7eeef305491aada96f1, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:21,796 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=b2fa4476a300de230748314eea6a8d6e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:21,797 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501796"}]},"ts":"1689159501796"} 2023-07-12 10:58:21,797 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501796"}]},"ts":"1689159501796"} 2023-07-12 10:58:21,796 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=026e0ec89cb1a85fdb48556cf732e729, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:21,796 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=3f5a3cf79edc90932fa731afa5980076, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:21,797 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501796"}]},"ts":"1689159501796"} 2023-07-12 10:58:21,796 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501796"}]},"ts":"1689159501796"} 2023-07-12 10:58:21,797 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501796"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159501796"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159501796"}]},"ts":"1689159501796"} 2023-07-12 10:58:21,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=130, state=RUNNABLE; OpenRegionProcedure 8fb9eebd295be7eeef305491aada96f1, 
server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:21,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=132, state=RUNNABLE; OpenRegionProcedure b2fa4476a300de230748314eea6a8d6e, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:21,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=131, state=RUNNABLE; OpenRegionProcedure 026e0ec89cb1a85fdb48556cf732e729, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:21,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure f028546910f9506853187e2757ec2d1d, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:21,801 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=133, state=RUNNABLE; OpenRegionProcedure 3f5a3cf79edc90932fa731afa5980076, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:21,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 10:58:21,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f028546910f9506853187e2757ec2d1d, NAME => 'Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 10:58:21,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,956 INFO [StoreOpener-f028546910f9506853187e2757ec2d1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,958 DEBUG [StoreOpener-f028546910f9506853187e2757ec2d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/f 2023-07-12 10:58:21,958 DEBUG [StoreOpener-f028546910f9506853187e2757ec2d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/f 
2023-07-12 10:58:21,958 INFO [StoreOpener-f028546910f9506853187e2757ec2d1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f028546910f9506853187e2757ec2d1d columnFamilyName f 2023-07-12 10:58:21,959 INFO [StoreOpener-f028546910f9506853187e2757ec2d1d-1] regionserver.HStore(310): Store=f028546910f9506853187e2757ec2d1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:21,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:21,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:21,967 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f028546910f9506853187e2757ec2d1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10096013920, jitterRate=-0.059735432267189026}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:21,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f028546910f9506853187e2757ec2d1d: 2023-07-12 10:58:21,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d., pid=138, masterSystemTime=1689159501950 2023-07-12 10:58:21,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:21,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 
2023-07-12 10:58:21,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8fb9eebd295be7eeef305491aada96f1, NAME => 'Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 10:58:21,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,971 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f028546910f9506853187e2757ec2d1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:21,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,971 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501971"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159501971"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159501971"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159501971"}]},"ts":"1689159501971"} 2023-07-12 10:58:21,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 
2023-07-12 10:58:21,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3f5a3cf79edc90932fa731afa5980076, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 10:58:21,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,975 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-12 10:58:21,975 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure f028546910f9506853187e2757ec2d1d, server=jenkins-hbase9.apache.org,39937,1689159477046 in 173 msec 2023-07-12 10:58:21,975 INFO [StoreOpener-8fb9eebd295be7eeef305491aada96f1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,975 INFO [StoreOpener-3f5a3cf79edc90932fa731afa5980076-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, ASSIGN in 336 msec 2023-07-12 10:58:21,977 DEBUG [StoreOpener-8fb9eebd295be7eeef305491aada96f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/f 2023-07-12 10:58:21,977 DEBUG [StoreOpener-3f5a3cf79edc90932fa731afa5980076-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/f 2023-07-12 10:58:21,977 DEBUG [StoreOpener-8fb9eebd295be7eeef305491aada96f1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/f 2023-07-12 10:58:21,977 DEBUG [StoreOpener-3f5a3cf79edc90932fa731afa5980076-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/f 2023-07-12 10:58:21,978 INFO [StoreOpener-8fb9eebd295be7eeef305491aada96f1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8fb9eebd295be7eeef305491aada96f1 columnFamilyName f 2023-07-12 10:58:21,978 INFO [StoreOpener-3f5a3cf79edc90932fa731afa5980076-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3f5a3cf79edc90932fa731afa5980076 columnFamilyName f 2023-07-12 10:58:21,978 INFO [StoreOpener-3f5a3cf79edc90932fa731afa5980076-1] regionserver.HStore(310): Store=3f5a3cf79edc90932fa731afa5980076/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:21,978 INFO [StoreOpener-8fb9eebd295be7eeef305491aada96f1-1] regionserver.HStore(310): Store=8fb9eebd295be7eeef305491aada96f1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:21,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 
10:58:21,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:21,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:21,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:21,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:21,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 8fb9eebd295be7eeef305491aada96f1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11844286400, jitterRate=0.10308513045310974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:21,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 8fb9eebd295be7eeef305491aada96f1: 2023-07-12 10:58:21,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1., pid=135, masterSystemTime=1689159501950 2023-07-12 10:58:21,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 3f5a3cf79edc90932fa731afa5980076; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11910183680, jitterRate=0.10922229290008545}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:21,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 3f5a3cf79edc90932fa731afa5980076: 2023-07-12 10:58:21,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:21,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 
2023-07-12 10:58:21,989 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=8fb9eebd295be7eeef305491aada96f1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:21,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076., pid=139, masterSystemTime=1689159501967 2023-07-12 10:58:21,990 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159501989"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159501989"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159501989"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159501989"}]},"ts":"1689159501989"} 2023-07-12 10:58:21,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:21,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:21,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:21,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 026e0ec89cb1a85fdb48556cf732e729, NAME => 'Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 10:58:21,992 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=3f5a3cf79edc90932fa731afa5980076, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:21,992 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159501992"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159501992"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159501992"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159501992"}]},"ts":"1689159501992"} 2023-07-12 10:58:21,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:21,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,992 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=130 2023-07-12 10:58:21,994 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=130, state=SUCCESS; OpenRegionProcedure 8fb9eebd295be7eeef305491aada96f1, server=jenkins-hbase9.apache.org,39937,1689159477046 in 193 msec 2023-07-12 10:58:21,994 INFO [StoreOpener-026e0ec89cb1a85fdb48556cf732e729-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,997 DEBUG [StoreOpener-026e0ec89cb1a85fdb48556cf732e729-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/f 2023-07-12 10:58:21,997 DEBUG [StoreOpener-026e0ec89cb1a85fdb48556cf732e729-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/f 2023-07-12 10:58:21,997 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, ASSIGN in 355 msec 2023-07-12 10:58:21,997 INFO [StoreOpener-026e0ec89cb1a85fdb48556cf732e729-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 026e0ec89cb1a85fdb48556cf732e729 columnFamilyName f 2023-07-12 10:58:21,998 INFO [StoreOpener-026e0ec89cb1a85fdb48556cf732e729-1] regionserver.HStore(310): Store=026e0ec89cb1a85fdb48556cf732e729/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:21,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=133 2023-07-12 10:58:21,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=133, state=SUCCESS; OpenRegionProcedure 3f5a3cf79edc90932fa731afa5980076, server=jenkins-hbase9.apache.org,46327,1689159476604 in 193 msec 2023-07-12 10:58:21,999 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, ASSIGN in 359 msec 2023-07-12 10:58:21,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:21,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:22,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 026e0ec89cb1a85fdb48556cf732e729; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9464020960, jitterRate=-0.11859436333179474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:22,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 026e0ec89cb1a85fdb48556cf732e729: 2023-07-12 10:58:22,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729., pid=137, masterSystemTime=1689159501967 2023-07-12 10:58:22,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:22,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:22,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 
2023-07-12 10:58:22,008 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=026e0ec89cb1a85fdb48556cf732e729, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b2fa4476a300de230748314eea6a8d6e, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 10:58:22,008 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502008"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159502008"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159502008"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159502008"}]},"ts":"1689159502008"} 2023-07-12 10:58:22,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:22,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,010 INFO [StoreOpener-b2fa4476a300de230748314eea6a8d6e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=131 2023-07-12 10:58:22,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=131, state=SUCCESS; OpenRegionProcedure 026e0ec89cb1a85fdb48556cf732e729, server=jenkins-hbase9.apache.org,46327,1689159476604 in 210 msec 2023-07-12 10:58:22,012 DEBUG [StoreOpener-b2fa4476a300de230748314eea6a8d6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/f 2023-07-12 10:58:22,012 DEBUG [StoreOpener-b2fa4476a300de230748314eea6a8d6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/f 2023-07-12 10:58:22,012 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, ASSIGN in 
372 msec 2023-07-12 10:58:22,012 INFO [StoreOpener-b2fa4476a300de230748314eea6a8d6e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b2fa4476a300de230748314eea6a8d6e columnFamilyName f 2023-07-12 10:58:22,013 INFO [StoreOpener-b2fa4476a300de230748314eea6a8d6e-1] regionserver.HStore(310): Store=b2fa4476a300de230748314eea6a8d6e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:22,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:22,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b2fa4476a300de230748314eea6a8d6e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11623811200, jitterRate=0.08255177736282349}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:22,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b2fa4476a300de230748314eea6a8d6e: 2023-07-12 10:58:22,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e., pid=136, masterSystemTime=1689159501967 2023-07-12 10:58:22,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:22,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 
2023-07-12 10:58:22,021 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=b2fa4476a300de230748314eea6a8d6e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,021 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502021"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159502021"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159502021"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159502021"}]},"ts":"1689159502021"} 2023-07-12 10:58:22,023 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=132 2023-07-12 10:58:22,024 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; OpenRegionProcedure b2fa4476a300de230748314eea6a8d6e, server=jenkins-hbase9.apache.org,46327,1689159476604 in 223 msec 2023-07-12 10:58:22,025 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-12 10:58:22,025 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, ASSIGN in 385 msec 2023-07-12 10:58:22,025 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:22,025 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502025"}]},"ts":"1689159502025"} 2023-07-12 10:58:22,026 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 10:58:22,034 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:22,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 509 msec 2023-07-12 10:58:22,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 10:58:22,131 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 129 completed 2023-07-12 10:58:22,131 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 10:58:22,131 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,135 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-12 10:58:22,136 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,136 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 10:58:22,136 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 10:58:22,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:22,143 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 10:58:22,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testDisabledTableMove 2023-07-12 10:58:22,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 10:58:22,147 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502147"}]},"ts":"1689159502147"} 2023-07-12 10:58:22,149 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 10:58:22,150 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 10:58:22,151 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, UNASSIGN}] 2023-07-12 10:58:22,152 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, UNASSIGN 2023-07-12 10:58:22,152 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, UNASSIGN 2023-07-12 10:58:22,152 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, UNASSIGN 2023-07-12 10:58:22,153 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, UNASSIGN 2023-07-12 10:58:22,153 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, UNASSIGN 2023-07-12 10:58:22,153 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=026e0ec89cb1a85fdb48556cf732e729, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502153"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502153"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502153"}]},"ts":"1689159502153"} 2023-07-12 10:58:22,154 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8fb9eebd295be7eeef305491aada96f1, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:22,154 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=b2fa4476a300de230748314eea6a8d6e, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,154 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159502154"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502154"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502154"}]},"ts":"1689159502154"} 2023-07-12 10:58:22,154 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502154"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502154"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502154"}]},"ts":"1689159502154"} 2023-07-12 10:58:22,154 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f028546910f9506853187e2757ec2d1d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:22,154 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159502154"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502154"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502154"}]},"ts":"1689159502154"} 2023-07-12 10:58:22,154 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=3f5a3cf79edc90932fa731afa5980076, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,155 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502154"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159502154"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159502154"}]},"ts":"1689159502154"} 2023-07-12 10:58:22,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure 026e0ec89cb1a85fdb48556cf732e729, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:22,156 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure 8fb9eebd295be7eeef305491aada96f1, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:22,156 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=143, state=RUNNABLE; CloseRegionProcedure b2fa4476a300de230748314eea6a8d6e, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:22,157 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure f028546910f9506853187e2757ec2d1d, server=jenkins-hbase9.apache.org,39937,1689159477046}] 2023-07-12 10:58:22,158 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 3f5a3cf79edc90932fa731afa5980076, server=jenkins-hbase9.apache.org,46327,1689159476604}] 2023-07-12 10:58:22,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 10:58:22,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:22,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:22,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 8fb9eebd295be7eeef305491aada96f1, disabling compactions & flushes 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 3f5a3cf79edc90932fa731afa5980076, disabling compactions & flushes 2023-07-12 10:58:22,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:22,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 
2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. after waiting 0 ms 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. after waiting 0 ms 2023-07-12 10:58:22,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:22,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:22,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:22,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076. 2023-07-12 10:58:22,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 3f5a3cf79edc90932fa731afa5980076: 2023-07-12 10:58:22,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1. 2023-07-12 10:58:22,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 8fb9eebd295be7eeef305491aada96f1: 2023-07-12 10:58:22,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:22,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 026e0ec89cb1a85fdb48556cf732e729, disabling compactions & flushes 2023-07-12 10:58:22,317 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 
2023-07-12 10:58:22,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:22,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. after waiting 0 ms 2023-07-12 10:58:22,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:22,317 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=3f5a3cf79edc90932fa731afa5980076, regionState=CLOSED 2023-07-12 10:58:22,317 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502317"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502317"}]},"ts":"1689159502317"} 2023-07-12 10:58:22,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:22,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:22,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f028546910f9506853187e2757ec2d1d, disabling compactions & flushes 2023-07-12 10:58:22,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:22,319 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8fb9eebd295be7eeef305491aada96f1, regionState=CLOSED 2023-07-12 10:58:22,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:22,319 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159502319"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502319"}]},"ts":"1689159502319"} 2023-07-12 10:58:22,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. after waiting 0 ms 2023-07-12 10:58:22,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 
2023-07-12 10:58:22,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-12 10:58:22,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 3f5a3cf79edc90932fa731afa5980076, server=jenkins-hbase9.apache.org,46327,1689159476604 in 161 msec 2023-07-12 10:58:22,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:22,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-12 10:58:22,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure 8fb9eebd295be7eeef305491aada96f1, server=jenkins-hbase9.apache.org,39937,1689159477046 in 164 msec 2023-07-12 10:58:22,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729. 2023-07-12 10:58:22,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 026e0ec89cb1a85fdb48556cf732e729: 2023-07-12 10:58:22,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3f5a3cf79edc90932fa731afa5980076, UNASSIGN in 170 msec 2023-07-12 10:58:22,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:22,324 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8fb9eebd295be7eeef305491aada96f1, UNASSIGN in 171 msec 2023-07-12 10:58:22,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d. 2023-07-12 10:58:22,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f028546910f9506853187e2757ec2d1d: 2023-07-12 10:58:22,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b2fa4476a300de230748314eea6a8d6e, disabling compactions & flushes 2023-07-12 10:58:22,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 
2023-07-12 10:58:22,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:22,325 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=026e0ec89cb1a85fdb48556cf732e729, regionState=CLOSED 2023-07-12 10:58:22,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. after waiting 0 ms 2023-07-12 10:58:22,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 2023-07-12 10:58:22,325 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502325"}]},"ts":"1689159502325"} 2023-07-12 10:58:22,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:22,326 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f028546910f9506853187e2757ec2d1d, regionState=CLOSED 2023-07-12 10:58:22,326 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689159502326"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502326"}]},"ts":"1689159502326"} 2023-07-12 10:58:22,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=142 2023-07-12 10:58:22,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=142, state=SUCCESS; CloseRegionProcedure 026e0ec89cb1a85fdb48556cf732e729, server=jenkins-hbase9.apache.org,46327,1689159476604 in 171 msec 2023-07-12 10:58:22,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:22,329 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-12 10:58:22,329 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure f028546910f9506853187e2757ec2d1d, server=jenkins-hbase9.apache.org,39937,1689159477046 in 170 msec 2023-07-12 10:58:22,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e. 
2023-07-12 10:58:22,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=026e0ec89cb1a85fdb48556cf732e729, UNASSIGN in 177 msec 2023-07-12 10:58:22,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b2fa4476a300de230748314eea6a8d6e: 2023-07-12 10:58:22,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f028546910f9506853187e2757ec2d1d, UNASSIGN in 178 msec 2023-07-12 10:58:22,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,332 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=b2fa4476a300de230748314eea6a8d6e, regionState=CLOSED 2023-07-12 10:58:22,332 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689159502332"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159502332"}]},"ts":"1689159502332"} 2023-07-12 10:58:22,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-12 10:58:22,335 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; CloseRegionProcedure b2fa4476a300de230748314eea6a8d6e, server=jenkins-hbase9.apache.org,46327,1689159476604 in 178 msec 2023-07-12 10:58:22,336 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=140 2023-07-12 10:58:22,336 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b2fa4476a300de230748314eea6a8d6e, UNASSIGN in 184 msec 2023-07-12 10:58:22,337 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159502337"}]},"ts":"1689159502337"} 2023-07-12 10:58:22,338 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 10:58:22,339 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 10:58:22,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 196 msec 2023-07-12 10:58:22,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 10:58:22,450 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-12 10:58:22,450 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables 
[Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:22,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 10:58:22,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1961977511, current retry=0 2023-07-12 10:58:22,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1961977511. 2023-07-12 10:58:22,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:22,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 10:58:22,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:22,463 INFO [Listener at localhost/35677] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 10:58:22,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testDisabledTableMove 2023-07-12 10:58:22,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at 
org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:22,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.2.10:34382 deadline: 1689159562463, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 10:58:22,464 DEBUG [Listener at localhost/35677] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-12 10:58:22,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testDisabledTableMove 2023-07-12 10:58:22,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,467 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1961977511' 2023-07-12 10:58:22,468 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:22,475 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:22,475 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:22,475 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:22,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,475 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 10:58:22,478 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/recovered.edits] 2023-07-12 10:58:22,478 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/recovered.edits] 2023-07-12 10:58:22,478 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/recovered.edits] 2023-07-12 10:58:22,479 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/recovered.edits] 2023-07-12 10:58:22,480 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/f, FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/recovered.edits] 2023-07-12 10:58:22,489 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/recovered.edits/4.seqid to 
hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729/recovered.edits/4.seqid 2023-07-12 10:58:22,489 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e/recovered.edits/4.seqid 2023-07-12 10:58:22,490 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d/recovered.edits/4.seqid 2023-07-12 10:58:22,490 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1/recovered.edits/4.seqid 2023-07-12 10:58:22,490 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/b2fa4476a300de230748314eea6a8d6e 2023-07-12 10:58:22,490 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/026e0ec89cb1a85fdb48556cf732e729 2023-07-12 10:58:22,491 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/8fb9eebd295be7eeef305491aada96f1 2023-07-12 10:58:22,492 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/recovered.edits/4.seqid to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/archive/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076/recovered.edits/4.seqid 2023-07-12 10:58:22,492 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/f028546910f9506853187e2757ec2d1d 2023-07-12 10:58:22,492 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/.tmp/data/default/Group_testDisabledTableMove/3f5a3cf79edc90932fa731afa5980076 2023-07-12 10:58:22,492 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 10:58:22,495 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting 
regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,497 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 10:58:22,503 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-12 10:58:22,504 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,504 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-12 10:58:22,504 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159502504"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,505 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159502504"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,505 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159502504"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,505 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159502504"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,505 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159502504"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,507 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 10:58:22,507 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8fb9eebd295be7eeef305491aada96f1, NAME => 'Group_testDisabledTableMove,,1689159501524.8fb9eebd295be7eeef305491aada96f1.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 026e0ec89cb1a85fdb48556cf732e729, NAME => 'Group_testDisabledTableMove,aaaaa,1689159501524.026e0ec89cb1a85fdb48556cf732e729.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b2fa4476a300de230748314eea6a8d6e, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689159501524.b2fa4476a300de230748314eea6a8d6e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3f5a3cf79edc90932fa731afa5980076, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689159501524.3f5a3cf79edc90932fa731afa5980076.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f028546910f9506853187e2757ec2d1d, NAME => 'Group_testDisabledTableMove,zzzzz,1689159501524.f028546910f9506853187e2757ec2d1d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 10:58:22,507 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 
'Group_testDisabledTableMove' as deleted. 2023-07-12 10:58:22,507 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159502507"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:22,508 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 10:58:22,510 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 10:58:22,511 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 47 msec 2023-07-12 10:58:22,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 10:58:22,578 INFO [Listener at localhost/35677] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-12 10:58:22,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:22,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
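The DISABLE (procId 140) and DELETE (procId 152) operations completed above are the blocking Admin calls issued by the test driver; the "Checking to see if procedure is done" records show the client polling until each procedure finishes. A minimal sketch of the equivalent client-side sequence, assuming a standard HBase 2.x Admin handle "admin" (the TableNotEnabledException branch matches the "already disabled, so just deleting it" record logged earlier):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    TableName tn = TableName.valueOf("Group_testDisabledTableMove");
    try {
      admin.disableTable(tn);   // submits DisableTableProcedure and waits for its procId
    } catch (TableNotEnabledException e) {
      // table is already disabled; fall through and delete it, as the test utility does
    }
    admin.deleteTable(tn);      // submits DeleteTableProcedure (archive regions, clear META)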
2023-07-12 10:58:22,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:22,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:34631] to rsgroup default 2023-07-12 10:58:22,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 10:58:22,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1961977511, current retry=0 2023-07-12 10:58:22,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,34631,1689159480880, jenkins-hbase9.apache.org,37587,1689159476845] are moved back to Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1961977511 => default 2023-07-12 10:58:22,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:22,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testDisabledTableMove_1961977511 2023-07-12 10:58:22,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:22,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:22,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:22,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
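The teardown above walks the usual rsgroup cleanup: move the group's tables back to the default group (a no-op region move here because the table is disabled), move its servers back to default, then remove the now-empty group. A minimal sketch of those calls through the branch-2.4 RSGroupAdminClient named in the stack traces, assuming an open Connection "connection" (group name taken from the log):

    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    String group = "Group_testDisabledTableMove_1961977511";

    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info != null) {
      // tables first, then servers, then drop the empty group
      rsGroupAdmin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.removeRSGroup(group);
    }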
2023-07-12 10:58:22,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:22,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:22,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:22,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:22,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:22,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:22,609 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:22,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:22,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:22,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:22,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:22,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:22,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160702623, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:22,624 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:22,626 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,627 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:22,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:22,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:22,648 INFO [Listener at localhost/35677] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=506 (was 502) Potentially hanging thread: hconnection-0x41a50c31-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1495442723_17 at /127.0.0.1:57588 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-83128980_17 at /127.0.0.1:59690 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2be10837-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=782 (was 753) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=312 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8200 (was 8123) - AvailableMemoryMB LEAK? 
- 2023-07-12 10:58:22,648 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-12 10:58:22,666 INFO [Listener at localhost/35677] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=506, OpenFileDescriptor=782, MaxFileDescriptor=60000, SystemLoadAverage=312, ProcessCount=170, AvailableMemoryMB=8199 2023-07-12 10:58:22,666 WARN [Listener at localhost/35677] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-12 10:58:22,666 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 10:58:22,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:22,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:22,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:22,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:22,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:22,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:22,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:22,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:22,680 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:22,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:22,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:22,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
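The "Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist" ConstraintException logged in the teardown above, and hit again in the setup that follows, comes from trying to move the active master's address into the "master" rsgroup: RSGroupAdminServer.moveServers only accepts addresses of known region servers, and 38969 is the master's RPC port, so the call is rejected and the test merely logs it ("Got this on setup, FYI"). A minimal sketch of that call pattern with the exception handled, again assuming the branch-2.4 RSGroupAdminClient and an open Connection "connection" (host and port copied from the log):

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    rsGroupAdmin.addRSGroup("master");   // recreate the bookkeeping group (fails if it already exists)
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 38969)),
          "master");
    } catch (ConstraintException e) {
      // expected when the address belongs to the master rather than a region server:
      // "Server ... is either offline or it does not exist."
    }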
2023-07-12 10:58:22,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:22,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:22,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:38969] to rsgroup master 2023-07-12 10:58:22,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:22,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:34382 deadline: 1689160702692, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 2023-07-12 10:58:22,693 WARN [Listener at localhost/35677] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:38969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:22,695 INFO [Listener at localhost/35677] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:22,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:22,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:22,696 INFO [Listener at localhost/35677] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34631, jenkins-hbase9.apache.org:37587, jenkins-hbase9.apache.org:39937, jenkins-hbase9.apache.org:46327], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:22,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:22,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:22,697 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 10:58:22,697 INFO [Listener at localhost/35677] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:22,697 DEBUG [Listener at localhost/35677] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29253406 to 127.0.0.1:52490 2023-07-12 10:58:22,697 DEBUG [Listener at localhost/35677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,701 DEBUG [Listener at localhost/35677] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:22,701 DEBUG [Listener at localhost/35677] util.JVMClusterUtil(257): Found active master hash=1096677079, stopped=false 2023-07-12 10:58:22,701 DEBUG [Listener at localhost/35677] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:22,701 DEBUG [Listener at localhost/35677] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:22,701 INFO [Listener at localhost/35677] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:58:22,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:22,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:22,702 INFO [Listener at localhost/35677] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:22,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:22,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:22,703 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:22,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:22,703 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:22,703 DEBUG [Listener at localhost/35677] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x532248e8 to 127.0.0.1:52490 2023-07-12 10:58:22,703 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:22,703 DEBUG [Listener at localhost/35677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:22,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:22,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:22,704 INFO [Listener at localhost/35677] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,46327,1689159476604' ***** 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,37587,1689159476845' ***** 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:22,705 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,39937,1689159477046' ***** 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:22,705 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:22,705 INFO [Listener at localhost/35677] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,34631,1689159480880' ***** 2023-07-12 10:58:22,705 INFO [RS:2;jenkins-hbase9:39937] 
regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:22,706 INFO [Listener at localhost/35677] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:22,710 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:22,724 INFO [RS:1;jenkins-hbase9:37587] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1ffcc362{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:22,724 INFO [RS:3;jenkins-hbase9:34631] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@519b7a7c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:22,724 INFO [RS:2;jenkins-hbase9:39937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7a6735a9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:22,724 INFO [RS:0;jenkins-hbase9:46327] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5cdf3cad{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:22,728 INFO [RS:2;jenkins-hbase9:39937] server.AbstractConnector(383): Stopped ServerConnector@5df4bbff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,728 INFO [RS:1;jenkins-hbase9:37587] server.AbstractConnector(383): Stopped ServerConnector@46770358{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,728 INFO [RS:0;jenkins-hbase9:46327] server.AbstractConnector(383): Stopped ServerConnector@5c80b18{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,728 INFO [RS:3;jenkins-hbase9:34631] server.AbstractConnector(383): Stopped ServerConnector@1c038137{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,728 INFO [RS:0;jenkins-hbase9:46327] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:22,728 INFO [RS:1;jenkins-hbase9:37587] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:22,728 INFO [RS:2;jenkins-hbase9:39937] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:22,729 INFO [RS:0;jenkins-hbase9:46327] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6b8a2300{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:22,728 INFO [RS:3;jenkins-hbase9:34631] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:22,731 INFO [RS:2;jenkins-hbase9:39937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@933d05e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:22,731 INFO [RS:0;jenkins-hbase9:46327] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4f598a02{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:22,730 INFO [RS:1;jenkins-hbase9:37587] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ba1cd62{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:22,732 INFO [RS:2;jenkins-hbase9:39937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c0d721c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:22,733 INFO [RS:1;jenkins-hbase9:37587] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@19e0a4a6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:22,732 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:22,732 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,732 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,732 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,732 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,731 INFO [RS:3;jenkins-hbase9:34631] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@726126ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:22,734 INFO [RS:3;jenkins-hbase9:34631] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:22,736 INFO [RS:1;jenkins-hbase9:37587] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:22,736 INFO [RS:3;jenkins-hbase9:34631] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:22,736 INFO [RS:0;jenkins-hbase9:46327] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:22,736 INFO [RS:2;jenkins-hbase9:39937] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:22,736 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:22,736 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:22,736 INFO [RS:2;jenkins-hbase9:39937] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:22,736 INFO [RS:0;jenkins-hbase9:46327] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 10:58:22,736 INFO [RS:2;jenkins-hbase9:39937] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:22,736 INFO [RS:1;jenkins-hbase9:37587] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:22,736 INFO [RS:3;jenkins-hbase9:34631] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:22,736 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(3305): Received CLOSE for 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:22,736 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:22,736 INFO [RS:3;jenkins-hbase9:34631] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:22,736 INFO [RS:1;jenkins-hbase9:37587] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:22,737 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:22,737 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:22,737 DEBUG [RS:3;jenkins-hbase9:34631] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x54244694 to 127.0.0.1:52490 2023-07-12 10:58:22,737 DEBUG [RS:1;jenkins-hbase9:37587] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5905e33e to 127.0.0.1:52490 2023-07-12 10:58:22,736 INFO [RS:0;jenkins-hbase9:46327] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:22,737 DEBUG [RS:1;jenkins-hbase9:37587] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,737 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(3305): Received CLOSE for 15c37347446d32713d4f7b66d1f2d533 2023-07-12 10:58:22,737 DEBUG [RS:3;jenkins-hbase9:34631] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,737 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:22,738 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,34631,1689159480880; all regions closed. 2023-07-12 10:58:22,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 15c37347446d32713d4f7b66d1f2d533, disabling compactions & flushes 2023-07-12 10:58:22,738 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,37587,1689159476845; all regions closed. 2023-07-12 10:58:22,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 80a841fbae3b853b4dd566e3c56f7cf0, disabling compactions & flushes 2023-07-12 10:58:22,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:22,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 
2023-07-12 10:58:22,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. after waiting 0 ms 2023-07-12 10:58:22,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:22,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 15c37347446d32713d4f7b66d1f2d533 1/1 column families, dataSize=22.08 KB heapSize=36.54 KB 2023-07-12 10:58:22,739 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(3305): Received CLOSE for 078a255e8fc6041e4f36118ed48c04a1 2023-07-12 10:58:22,739 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(3305): Received CLOSE for 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:22,739 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,738 DEBUG [RS:2;jenkins-hbase9:39937] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x079af665 to 127.0.0.1:52490 2023-07-12 10:58:22,739 DEBUG [RS:2;jenkins-hbase9:39937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,739 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:22,739 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1478): Online Regions={80a841fbae3b853b4dd566e3c56f7cf0=testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0.} 2023-07-12 10:58:22,740 DEBUG [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1504): Waiting on 80a841fbae3b853b4dd566e3c56f7cf0 2023-07-12 10:58:22,739 DEBUG [RS:0;jenkins-hbase9:46327] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x780006b3 to 127.0.0.1:52490 2023-07-12 10:58:22,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:22,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:22,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. after waiting 0 ms 2023-07-12 10:58:22,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:22,741 DEBUG [RS:0;jenkins-hbase9:46327] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,741 INFO [RS:0;jenkins-hbase9:46327] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:22,741 INFO [RS:0;jenkins-hbase9:46327] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:22,741 INFO [RS:0;jenkins-hbase9:46327] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:22,741 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:22,745 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 10:58:22,745 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 15c37347446d32713d4f7b66d1f2d533=hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533., 078a255e8fc6041e4f36118ed48c04a1=hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1., 333f3e2f0b856b448ae4dcb22c19e113=unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113.} 2023-07-12 10:58:22,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:22,745 DEBUG [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1504): Waiting on 078a255e8fc6041e4f36118ed48c04a1, 1588230740, 15c37347446d32713d4f7b66d1f2d533, 333f3e2f0b856b448ae4dcb22c19e113 2023-07-12 10:58:22,745 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:22,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:22,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:22,745 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:22,746 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=79.40 KB heapSize=125.29 KB 2023-07-12 10:58:22,763 DEBUG [RS:3;jenkins-hbase9:34631] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs 2023-07-12 10:58:22,763 INFO [RS:3;jenkins-hbase9:34631] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C34631%2C1689159480880:(num 1689159481361) 2023-07-12 10:58:22,763 DEBUG [RS:3;jenkins-hbase9:34631] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,763 INFO [RS:3;jenkins-hbase9:34631] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/testRename/80a841fbae3b853b4dd566e3c56f7cf0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 10:58:22,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 
2023-07-12 10:58:22,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 80a841fbae3b853b4dd566e3c56f7cf0: 2023-07-12 10:58:22,772 INFO [RS:3;jenkins-hbase9:34631] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:22,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689159495427.80a841fbae3b853b4dd566e3c56f7cf0. 2023-07-12 10:58:22,772 INFO [RS:3;jenkins-hbase9:34631] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:22,773 INFO [RS:3;jenkins-hbase9:34631] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:22,773 INFO [RS:3;jenkins-hbase9:34631] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:22,772 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:22,774 INFO [RS:3;jenkins-hbase9:34631] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:34631 2023-07-12 10:58:22,775 DEBUG [RS:1;jenkins-hbase9:37587] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs 2023-07-12 10:58:22,775 INFO [RS:1;jenkins-hbase9:37587] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C37587%2C1689159476845:(num 1689159479179) 2023-07-12 10:58:22,776 DEBUG [RS:1;jenkins-hbase9:37587] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,776 INFO [RS:1;jenkins-hbase9:37587] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,776 INFO [RS:1;jenkins-hbase9:37587] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:22,776 INFO [RS:1;jenkins-hbase9:37587] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:22,777 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:22,777 INFO [RS:1;jenkins-hbase9:37587] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:22,777 INFO [RS:1;jenkins-hbase9:37587] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:22,778 INFO [RS:1;jenkins-hbase9:37587] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:37587 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34631,1689159480880 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:22,789 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:22,790 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,37587,1689159476845] 2023-07-12 10:58:22,790 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,37587,1689159476845; numProcessing=1 2023-07-12 10:58:22,791 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,37587,1689159476845 2023-07-12 10:58:22,791 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,37587,1689159476845 already deleted, retry=false 2023-07-12 10:58:22,791 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,37587,1689159476845 expired; onlineServers=3 2023-07-12 10:58:22,791 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,34631,1689159480880] 2023-07-12 10:58:22,791 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,34631,1689159480880; numProcessing=2 2023-07-12 10:58:22,792 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,34631,1689159480880 already deleted, retry=false 2023-07-12 10:58:22,792 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,34631,1689159480880 expired; onlineServers=2 2023-07-12 10:58:22,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.08 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/f02a0ee020ec4d96adceeb64805bbbef 2023-07-12 10:58:22,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f02a0ee020ec4d96adceeb64805bbbef 2023-07-12 10:58:22,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/.tmp/m/f02a0ee020ec4d96adceeb64805bbbef as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/f02a0ee020ec4d96adceeb64805bbbef 2023-07-12 10:58:22,808 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.42 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/info/3171e3cccd5b482789afee9ab798bf59 2023-07-12 10:58:22,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f02a0ee020ec4d96adceeb64805bbbef 2023-07-12 10:58:22,814 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata 
for 3171e3cccd5b482789afee9ab798bf59 2023-07-12 10:58:22,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/m/f02a0ee020ec4d96adceeb64805bbbef, entries=22, sequenceid=107, filesize=5.9 K 2023-07-12 10:58:22,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.08 KB/22614, heapSize ~36.52 KB/37400, currentSize=0 B/0 for 15c37347446d32713d4f7b66d1f2d533 in 76ms, sequenceid=107, compaction requested=true 2023-07-12 10:58:22,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/rsgroup/15c37347446d32713d4f7b66d1f2d533/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-12 10:58:22,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:22,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 15c37347446d32713d4f7b66d1f2d533: 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159479869.15c37347446d32713d4f7b66d1f2d533. 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 078a255e8fc6041e4f36118ed48c04a1, disabling compactions & flushes 2023-07-12 10:58:22,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. after waiting 0 ms 2023-07-12 10:58:22,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 
2023-07-12 10:58:22,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 078a255e8fc6041e4f36118ed48c04a1 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 10:58:22,833 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/rep_barrier/a2b291b130d84e738bae94310b2eb6c4 2023-07-12 10:58:22,839 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a2b291b130d84e738bae94310b2eb6c4 2023-07-12 10:58:22,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/.tmp/info/832a14dc06014b6184cf39b5858f4e86 2023-07-12 10:58:22,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/.tmp/info/832a14dc06014b6184cf39b5858f4e86 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/info/832a14dc06014b6184cf39b5858f4e86 2023-07-12 10:58:22,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/info/832a14dc06014b6184cf39b5858f4e86, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 10:58:22,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 078a255e8fc6041e4f36118ed48c04a1 in 47ms, sequenceid=6, compaction requested=false 2023-07-12 10:58:22,872 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/table/8a4d06559adb4a498cd3e12a44f27ff8 2023-07-12 10:58:22,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/namespace/078a255e8fc6041e4f36118ed48c04a1/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 10:58:22,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 078a255e8fc6041e4f36118ed48c04a1: 2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159479788.078a255e8fc6041e4f36118ed48c04a1. 
2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 333f3e2f0b856b448ae4dcb22c19e113, disabling compactions & flushes 2023-07-12 10:58:22,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. after waiting 0 ms 2023-07-12 10:58:22,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:22,881 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8a4d06559adb4a498cd3e12a44f27ff8 2023-07-12 10:58:22,882 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/info/3171e3cccd5b482789afee9ab798bf59 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/info/3171e3cccd5b482789afee9ab798bf59 2023-07-12 10:58:22,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/default/unmovedTable/333f3e2f0b856b448ae4dcb22c19e113/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 10:58:22,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 2023-07-12 10:58:22,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 333f3e2f0b856b448ae4dcb22c19e113: 2023-07-12 10:58:22,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689159497590.333f3e2f0b856b448ae4dcb22c19e113. 
2023-07-12 10:58:22,889 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3171e3cccd5b482789afee9ab798bf59 2023-07-12 10:58:22,889 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/info/3171e3cccd5b482789afee9ab798bf59, entries=100, sequenceid=204, filesize=16.3 K 2023-07-12 10:58:22,890 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/rep_barrier/a2b291b130d84e738bae94310b2eb6c4 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/rep_barrier/a2b291b130d84e738bae94310b2eb6c4 2023-07-12 10:58:22,896 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a2b291b130d84e738bae94310b2eb6c4 2023-07-12 10:58:22,896 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/rep_barrier/a2b291b130d84e738bae94310b2eb6c4, entries=18, sequenceid=204, filesize=6.9 K 2023-07-12 10:58:22,897 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/.tmp/table/8a4d06559adb4a498cd3e12a44f27ff8 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/table/8a4d06559adb4a498cd3e12a44f27ff8 2023-07-12 10:58:22,900 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:22,900 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:37587-0x1015920de080002, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:22,901 INFO [RS:1;jenkins-hbase9:37587] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,37587,1689159476845; zookeeper connection closed. 
2023-07-12 10:58:22,901 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4c298fd2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4c298fd2 2023-07-12 10:58:22,904 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8a4d06559adb4a498cd3e12a44f27ff8 2023-07-12 10:58:22,904 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/table/8a4d06559adb4a498cd3e12a44f27ff8, entries=31, sequenceid=204, filesize=7.4 K 2023-07-12 10:58:22,905 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~79.40 KB/81309, heapSize ~125.24 KB/128248, currentSize=0 B/0 for 1588230740 in 160ms, sequenceid=204, compaction requested=false 2023-07-12 10:58:22,918 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/data/hbase/meta/1588230740/recovered.edits/207.seqid, newMaxSeqId=207, maxSeqId=1 2023-07-12 10:58:22,918 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:22,919 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:22,919 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:22,919 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:22,940 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,39937,1689159477046; all regions closed. 2023-07-12 10:58:22,945 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,46327,1689159476604; all regions closed. 2023-07-12 10:58:22,948 DEBUG [RS:2;jenkins-hbase9:39937] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs 2023-07-12 10:58:22,949 INFO [RS:2;jenkins-hbase9:39937] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C39937%2C1689159477046:(num 1689159479180) 2023-07-12 10:58:22,949 DEBUG [RS:2;jenkins-hbase9:39937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,949 INFO [RS:2;jenkins-hbase9:39937] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,950 INFO [RS:2;jenkins-hbase9:39937] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:22,950 INFO [RS:2;jenkins-hbase9:39937] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:22,950 INFO [RS:2;jenkins-hbase9:39937] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:22,951 INFO [RS:2;jenkins-hbase9:39937] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:22,951 INFO [RS:2;jenkins-hbase9:39937] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:39937 2023-07-12 10:58:22,952 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:22,955 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:22,955 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39937,1689159477046 2023-07-12 10:58:22,955 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,955 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,39937,1689159477046] 2023-07-12 10:58:22,955 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,39937,1689159477046; numProcessing=3 2023-07-12 10:58:22,956 DEBUG [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs 2023-07-12 10:58:22,956 INFO [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C46327%2C1689159476604.meta:.meta(num 1689159479471) 2023-07-12 10:58:22,957 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,39937,1689159477046 already deleted, retry=false 2023-07-12 10:58:22,957 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,39937,1689159477046 expired; onlineServers=1 2023-07-12 10:58:22,962 DEBUG [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/oldWALs 2023-07-12 10:58:22,962 INFO [RS:0;jenkins-hbase9:46327] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C46327%2C1689159476604:(num 1689159479180) 2023-07-12 10:58:22,962 DEBUG [RS:0;jenkins-hbase9:46327] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,963 INFO [RS:0;jenkins-hbase9:46327] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:22,963 INFO [RS:0;jenkins-hbase9:46327] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:22,963 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 10:58:22,964 INFO [RS:0;jenkins-hbase9:46327] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:46327 2023-07-12 10:58:22,967 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,46327,1689159476604 2023-07-12 10:58:22,967 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:22,969 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,46327,1689159476604] 2023-07-12 10:58:22,969 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,46327,1689159476604; numProcessing=4 2023-07-12 10:58:22,970 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,46327,1689159476604 already deleted, retry=false 2023-07-12 10:58:22,970 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,46327,1689159476604 expired; onlineServers=0 2023-07-12 10:58:22,970 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,38969,1689159474659' ***** 2023-07-12 10:58:22,970 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 10:58:22,971 DEBUG [M:0;jenkins-hbase9:38969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60fcec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:22,971 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:22,972 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:22,973 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:22,973 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:22,973 INFO [M:0;jenkins-hbase9:38969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66df7ad1{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:22,973 INFO [M:0;jenkins-hbase9:38969] server.AbstractConnector(383): Stopped ServerConnector@631e341c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,974 INFO [M:0;jenkins-hbase9:38969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:22,974 INFO [M:0;jenkins-hbase9:38969] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@7b463d55{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:22,975 INFO [M:0;jenkins-hbase9:38969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@563e1db6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:22,975 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,38969,1689159474659 2023-07-12 10:58:22,975 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,38969,1689159474659; all regions closed. 2023-07-12 10:58:22,975 DEBUG [M:0;jenkins-hbase9:38969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:22,975 INFO [M:0;jenkins-hbase9:38969] master.HMaster(1491): Stopping master jetty server 2023-07-12 10:58:22,976 INFO [M:0;jenkins-hbase9:38969] server.AbstractConnector(383): Stopped ServerConnector@59729c1c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:22,976 DEBUG [M:0;jenkins-hbase9:38969] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 10:58:22,977 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 10:58:22,977 DEBUG [M:0;jenkins-hbase9:38969] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 10:58:22,977 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159478752] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159478752,5,FailOnTimeoutGroup] 2023-07-12 10:58:22,977 INFO [M:0;jenkins-hbase9:38969] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 10:58:22,977 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159478753] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159478753,5,FailOnTimeoutGroup] 2023-07-12 10:58:22,977 INFO [M:0;jenkins-hbase9:38969] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 10:58:22,977 INFO [M:0;jenkins-hbase9:38969] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown 2023-07-12 10:58:22,977 DEBUG [M:0;jenkins-hbase9:38969] master.HMaster(1512): Stopping service threads 2023-07-12 10:58:22,977 INFO [M:0;jenkins-hbase9:38969] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 10:58:22,978 ERROR [M:0;jenkins-hbase9:38969] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 10:58:22,978 INFO [M:0;jenkins-hbase9:38969] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 10:58:22,978 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 10:58:22,979 DEBUG [M:0;jenkins-hbase9:38969] zookeeper.ZKUtil(398): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 10:58:22,979 WARN [M:0;jenkins-hbase9:38969] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 10:58:22,979 INFO [M:0;jenkins-hbase9:38969] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 10:58:22,979 INFO [M:0;jenkins-hbase9:38969] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 10:58:22,979 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:22,979 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:22,979 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:22,979 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:22,979 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:22,979 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=509.52 KB heapSize=609.52 KB 2023-07-12 10:58:22,995 INFO [M:0;jenkins-hbase9:38969] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=509.52 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e83514bb327b4f6dad044b8c6d354259 2023-07-12 10:58:23,000 INFO [RS:3;jenkins-hbase9:34631] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,34631,1689159480880; zookeeper connection closed. 2023-07-12 10:58:23,000 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,000 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:34631-0x1015920de08000b, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,000 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@41972d90] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@41972d90 2023-07-12 10:58:23,002 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e83514bb327b4f6dad044b8c6d354259 as hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e83514bb327b4f6dad044b8c6d354259 2023-07-12 10:58:23,007 INFO [M:0;jenkins-hbase9:38969] regionserver.HStore(1080): Added hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e83514bb327b4f6dad044b8c6d354259, entries=151, sequenceid=1128, filesize=26.6 K 2023-07-12 10:58:23,008 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegion(2948): Finished flush of dataSize ~509.52 KB/521746, heapSize ~609.51 KB/624136, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=1128, compaction requested=false 2023-07-12 10:58:23,009 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:23,010 DEBUG [M:0;jenkins-hbase9:38969] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:23,021 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/MasterData/WALs/jenkins-hbase9.apache.org,38969,1689159474659/jenkins-hbase9.apache.org%2C38969%2C1689159474659.1689159477777 not finished, retry = 0 2023-07-12 10:58:23,122 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:23,123 INFO [M:0;jenkins-hbase9:38969] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-12 10:58:23,123 INFO [M:0;jenkins-hbase9:38969] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:38969 2023-07-12 10:58:23,126 DEBUG [M:0;jenkins-hbase9:38969] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,38969,1689159474659 already deleted, retry=false 2023-07-12 10:58:23,601 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,601 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): master:38969-0x1015920de080000, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,601 INFO [M:0;jenkins-hbase9:38969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,38969,1689159474659; zookeeper connection closed. 2023-07-12 10:58:23,701 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,701 INFO [RS:0;jenkins-hbase9:46327] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,46327,1689159476604; zookeeper connection closed. 2023-07-12 10:58:23,702 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:46327-0x1015920de080001, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,702 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19800e52] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19800e52 2023-07-12 10:58:23,802 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,802 INFO [RS:2;jenkins-hbase9:39937] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,39937,1689159477046; zookeeper connection closed. 
2023-07-12 10:58:23,802 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): regionserver:39937-0x1015920de080003, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:23,802 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@68497063] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@68497063 2023-07-12 10:58:23,802 INFO [Listener at localhost/35677] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 10:58:23,803 WARN [Listener at localhost/35677] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:23,808 INFO [Listener at localhost/35677] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:23,813 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:23,813 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-372595763-172.31.2.10-1689159471177 (Datanode Uuid 9c477974-4950-45df-93d6-941a878dce48) service to localhost/127.0.0.1:36583 2023-07-12 10:58:23,815 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data5/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,815 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data6/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,818 WARN [Listener at localhost/35677] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:23,821 INFO [Listener at localhost/35677] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:23,827 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:23,827 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-372595763-172.31.2.10-1689159471177 (Datanode Uuid c3652c40-4af8-4815-817e-8c41a9db1a48) service to localhost/127.0.0.1:36583 2023-07-12 10:58:23,828 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data3/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,828 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data4/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,830 WARN [Listener at localhost/35677] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:23,835 INFO [Listener at localhost/35677] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:23,937 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:23,937 WARN [BP-372595763-172.31.2.10-1689159471177 heartbeating to localhost/127.0.0.1:36583] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-372595763-172.31.2.10-1689159471177 (Datanode Uuid 56647dd7-3fc8-4faf-8ac2-c333fa567928) service to localhost/127.0.0.1:36583 2023-07-12 10:58:23,938 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data1/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,938 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/cluster_b795fb16-46d7-bc95-a989-8cba1bea2719/dfs/data/data2/current/BP-372595763-172.31.2.10-1689159471177] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:23,969 INFO [Listener at localhost/35677] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:24,087 INFO [Listener at localhost/35677] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 10:58:24,136 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 10:58:24,136 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 10:58:24,136 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.log.dir so I do NOT create it in target/test-data/def3674d-6c86-f65a-310b-855514e8ca64 2023-07-12 10:58:24,136 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/19882403-41d4-e551-af46-4a434b94f356/hadoop.tmp.dir so I do NOT create it in target/test-data/def3674d-6c86-f65a-310b-855514e8ca64 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data 
directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903, deleteOnExit=true 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/test.cache.data in system properties and HBase conf 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir in system properties and HBase conf 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 10:58:24,137 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 10:58:24,137 DEBUG [Listener at localhost/35677] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:58:24,138 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/nfs.dump.dir in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 10:58:24,139 INFO [Listener at localhost/35677] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 10:58:24,143 WARN [Listener at localhost/35677] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:58:24,143 WARN [Listener at localhost/35677] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:58:24,182 WARN [Listener at localhost/35677] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:24,184 INFO [Listener at localhost/35677] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:24,185 DEBUG [Listener at localhost/35677-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015920de08000a, quorum=127.0.0.1:52490, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 10:58:24,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015920de08000a, quorum=127.0.0.1:52490, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 10:58:24,188 INFO [Listener at localhost/35677] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/Jetty_localhost_44819_hdfs____.uqtv1y/webapp 2023-07-12 10:58:24,288 INFO [Listener at localhost/35677] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44819 2023-07-12 10:58:24,293 WARN [Listener at localhost/35677] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:58:24,293 WARN [Listener at localhost/35677] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:58:24,336 WARN [Listener at localhost/44937] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:24,355 WARN [Listener at localhost/44937] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:24,357 WARN [Listener 
at localhost/44937] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:24,359 INFO [Listener at localhost/44937] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:24,364 INFO [Listener at localhost/44937] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/Jetty_localhost_36071_datanode____is41ta/webapp 2023-07-12 10:58:24,469 INFO [Listener at localhost/44937] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36071 2023-07-12 10:58:24,476 WARN [Listener at localhost/33099] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:24,494 WARN [Listener at localhost/33099] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:24,496 WARN [Listener at localhost/33099] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:24,498 INFO [Listener at localhost/33099] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:24,505 INFO [Listener at localhost/33099] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/Jetty_localhost_34569_datanode____.c6vysi/webapp 2023-07-12 10:58:24,590 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf84792ddf49df10d: Processing first storage report for DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618 from datanode 346287a7-7a51-437f-a62b-295e24f1e944 2023-07-12 10:58:24,590 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf84792ddf49df10d: from storage DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618 node DatanodeRegistration(127.0.0.1:39071, datanodeUuid=346287a7-7a51-437f-a62b-295e24f1e944, infoPort=37279, infoSecurePort=0, ipcPort=33099, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,591 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf84792ddf49df10d: Processing first storage report for DS-547bbab7-4ada-41c1-8a49-f2b4bf21e1f9 from datanode 346287a7-7a51-437f-a62b-295e24f1e944 2023-07-12 10:58:24,591 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf84792ddf49df10d: from storage DS-547bbab7-4ada-41c1-8a49-f2b4bf21e1f9 node DatanodeRegistration(127.0.0.1:39071, datanodeUuid=346287a7-7a51-437f-a62b-295e24f1e944, infoPort=37279, infoSecurePort=0, ipcPort=33099, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,618 INFO [Listener at localhost/33099] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34569 2023-07-12 10:58:24,626 WARN [Listener at localhost/37129] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 10:58:24,649 WARN [Listener at localhost/37129] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:24,652 WARN [Listener at localhost/37129] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:24,653 INFO [Listener at localhost/37129] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:24,656 INFO [Listener at localhost/37129] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/Jetty_localhost_41159_datanode____f6ldjz/webapp 2023-07-12 10:58:24,731 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b082505170282a1: Processing first storage report for DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97 from datanode 563dfeb3-2d02-44cc-be8a-39ec9cc16707 2023-07-12 10:58:24,731 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b082505170282a1: from storage DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97 node DatanodeRegistration(127.0.0.1:32801, datanodeUuid=563dfeb3-2d02-44cc-be8a-39ec9cc16707, infoPort=37661, infoSecurePort=0, ipcPort=37129, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,731 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b082505170282a1: Processing first storage report for DS-7f63d062-9d2e-4396-8bb7-43c2e5a2c4b3 from datanode 563dfeb3-2d02-44cc-be8a-39ec9cc16707 2023-07-12 10:58:24,731 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b082505170282a1: from storage DS-7f63d062-9d2e-4396-8bb7-43c2e5a2c4b3 node DatanodeRegistration(127.0.0.1:32801, datanodeUuid=563dfeb3-2d02-44cc-be8a-39ec9cc16707, infoPort=37661, infoSecurePort=0, ipcPort=37129, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,770 INFO [Listener at localhost/37129] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41159 2023-07-12 10:58:24,778 WARN [Listener at localhost/44365] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:24,864 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:24,864 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:24,864 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:24,887 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x32963db3934eed2a: Processing first storage report for DS-bc6ca73f-d318-4496-9017-485aa892364e from 
datanode e1b28d60-9778-4343-be30-f4828c7c77c1 2023-07-12 10:58:24,887 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x32963db3934eed2a: from storage DS-bc6ca73f-d318-4496-9017-485aa892364e node DatanodeRegistration(127.0.0.1:41919, datanodeUuid=e1b28d60-9778-4343-be30-f4828c7c77c1, infoPort=42139, infoSecurePort=0, ipcPort=44365, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,887 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x32963db3934eed2a: Processing first storage report for DS-a93f52ab-8118-4523-9790-5e71e82acf0c from datanode e1b28d60-9778-4343-be30-f4828c7c77c1 2023-07-12 10:58:24,887 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x32963db3934eed2a: from storage DS-a93f52ab-8118-4523-9790-5e71e82acf0c node DatanodeRegistration(127.0.0.1:41919, datanodeUuid=e1b28d60-9778-4343-be30-f4828c7c77c1, infoPort=42139, infoSecurePort=0, ipcPort=44365, storageInfo=lv=-57;cid=testClusterID;nsid=1711347443;c=1689159504146), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:24,988 DEBUG [Listener at localhost/44365] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64 2023-07-12 10:58:24,990 INFO [Listener at localhost/44365] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/zookeeper_0, clientPort=50934, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 10:58:24,992 INFO [Listener at localhost/44365] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50934 2023-07-12 10:58:24,992 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:24,993 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,010 INFO [Listener at localhost/44365] util.FSUtils(471): Created version file at hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06 with version=8 2023-07-12 10:58:25,010 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/hbase-staging 2023-07-12 10:58:25,011 DEBUG [Listener at localhost/44365] 
hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 10:58:25,012 DEBUG [Listener at localhost/44365] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 10:58:25,012 DEBUG [Listener at localhost/44365] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 10:58:25,012 DEBUG [Listener at localhost/44365] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-12 10:58:25,013 INFO [Listener at localhost/44365] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:25,013 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,013 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,014 INFO [Listener at localhost/44365] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:25,014 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,014 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:25,014 INFO [Listener at localhost/44365] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:25,015 INFO [Listener at localhost/44365] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43287 2023-07-12 10:58:25,016 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,017 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,019 INFO [Listener at localhost/44365] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43287 connecting to ZooKeeper ensemble=127.0.0.1:50934 2023-07-12 10:58:25,028 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:432870x0, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:25,029 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43287-0x101592157fd0000 connected 2023-07-12 10:58:25,054 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:25,054 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-07-12 10:58:25,054 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:25,060 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43287 2023-07-12 10:58:25,060 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43287 2023-07-12 10:58:25,060 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43287 2023-07-12 10:58:25,062 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43287 2023-07-12 10:58:25,062 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43287 2023-07-12 10:58:25,064 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:25,064 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:25,064 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:25,065 INFO [Listener at localhost/44365] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:58:25,065 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:25,065 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:25,065 INFO [Listener at localhost/44365] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 10:58:25,065 INFO [Listener at localhost/44365] http.HttpServer(1146): Jetty bound to port 39113 2023-07-12 10:58:25,066 INFO [Listener at localhost/44365] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:25,069 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,069 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@56f0345d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:25,070 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,070 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d03266b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:25,184 INFO [Listener at localhost/44365] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:25,185 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:25,185 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:25,185 INFO [Listener at localhost/44365] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:25,186 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,187 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@37863041{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/jetty-0_0_0_0-39113-hbase-server-2_4_18-SNAPSHOT_jar-_-any-541444316655460305/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:25,188 INFO [Listener at localhost/44365] server.AbstractConnector(333): Started ServerConnector@2991b76c{HTTP/1.1, (http/1.1)}{0.0.0.0:39113} 2023-07-12 10:58:25,188 INFO [Listener at localhost/44365] server.Server(415): Started @36087ms 2023-07-12 10:58:25,188 INFO [Listener at localhost/44365] master.HMaster(444): hbase.rootdir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06, hbase.cluster.distributed=false 2023-07-12 10:58:25,202 INFO [Listener at localhost/44365] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:25,202 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,202 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,203 INFO 
[Listener at localhost/44365] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:25,203 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,203 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:25,203 INFO [Listener at localhost/44365] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:25,204 INFO [Listener at localhost/44365] ipc.NettyRpcServer(120): Bind to /172.31.2.10:40093 2023-07-12 10:58:25,204 INFO [Listener at localhost/44365] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:25,205 DEBUG [Listener at localhost/44365] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:25,206 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,207 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,208 INFO [Listener at localhost/44365] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40093 connecting to ZooKeeper ensemble=127.0.0.1:50934 2023-07-12 10:58:25,211 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:400930x0, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:25,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40093-0x101592157fd0001 connected 2023-07-12 10:58:25,213 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:25,214 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:25,214 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:25,217 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40093 2023-07-12 10:58:25,218 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40093 2023-07-12 10:58:25,218 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40093 2023-07-12 10:58:25,220 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40093 2023-07-12 10:58:25,221 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40093 2023-07-12 10:58:25,223 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:25,223 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:25,223 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:25,223 INFO [Listener at localhost/44365] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:25,224 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:25,224 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:25,224 INFO [Listener at localhost/44365] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:25,225 INFO [Listener at localhost/44365] http.HttpServer(1146): Jetty bound to port 34225 2023-07-12 10:58:25,225 INFO [Listener at localhost/44365] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:25,227 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,228 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5a43795a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:25,228 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,228 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2dcd6b23{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:25,341 INFO [Listener at localhost/44365] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:25,342 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:25,342 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:25,343 INFO [Listener at localhost/44365] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:25,343 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,344 INFO 
[Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@230e1553{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/jetty-0_0_0_0-34225-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4520949369104220904/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:25,346 INFO [Listener at localhost/44365] server.AbstractConnector(333): Started ServerConnector@4c16c29d{HTTP/1.1, (http/1.1)}{0.0.0.0:34225} 2023-07-12 10:58:25,346 INFO [Listener at localhost/44365] server.Server(415): Started @36245ms 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:25,358 INFO [Listener at localhost/44365] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:25,360 INFO [Listener at localhost/44365] ipc.NettyRpcServer(120): Bind to /172.31.2.10:45603 2023-07-12 10:58:25,360 INFO [Listener at localhost/44365] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:25,361 DEBUG [Listener at localhost/44365] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:25,362 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,363 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,364 INFO [Listener at localhost/44365] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45603 connecting to ZooKeeper ensemble=127.0.0.1:50934 2023-07-12 10:58:25,368 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:456030x0, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
10:58:25,369 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:456030x0, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:25,370 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45603-0x101592157fd0002 connected 2023-07-12 10:58:25,370 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:25,371 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:25,373 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45603 2023-07-12 10:58:25,374 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45603 2023-07-12 10:58:25,374 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45603 2023-07-12 10:58:25,378 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45603 2023-07-12 10:58:25,378 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45603 2023-07-12 10:58:25,380 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:25,380 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:25,380 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:25,381 INFO [Listener at localhost/44365] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:25,381 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:25,381 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:25,382 INFO [Listener at localhost/44365] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 10:58:25,382 INFO [Listener at localhost/44365] http.HttpServer(1146): Jetty bound to port 44987 2023-07-12 10:58:25,383 INFO [Listener at localhost/44365] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:25,386 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,387 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b118b49{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:25,387 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,387 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e68d44a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:25,501 INFO [Listener at localhost/44365] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:25,502 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:25,502 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:25,502 INFO [Listener at localhost/44365] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:25,503 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,504 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a24290f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/jetty-0_0_0_0-44987-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1842589734146713145/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:25,506 INFO [Listener at localhost/44365] server.AbstractConnector(333): Started ServerConnector@727af871{HTTP/1.1, (http/1.1)}{0.0.0.0:44987} 2023-07-12 10:58:25,506 INFO [Listener at localhost/44365] server.Server(415): Started @36405ms 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:25,518 INFO 
[Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:25,518 INFO [Listener at localhost/44365] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:25,520 INFO [Listener at localhost/44365] ipc.NettyRpcServer(120): Bind to /172.31.2.10:37051 2023-07-12 10:58:25,520 INFO [Listener at localhost/44365] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:25,521 DEBUG [Listener at localhost/44365] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:25,522 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,523 INFO [Listener at localhost/44365] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,524 INFO [Listener at localhost/44365] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37051 connecting to ZooKeeper ensemble=127.0.0.1:50934 2023-07-12 10:58:25,528 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:370510x0, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:25,530 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:370510x0, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:25,531 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37051-0x101592157fd0003 connected 2023-07-12 10:58:25,531 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:25,532 DEBUG [Listener at localhost/44365] zookeeper.ZKUtil(164): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:25,532 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37051 2023-07-12 10:58:25,532 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37051 2023-07-12 10:58:25,533 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37051 2023-07-12 10:58:25,533 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37051 2023-07-12 10:58:25,533 DEBUG [Listener at localhost/44365] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=37051 2023-07-12 10:58:25,535 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:25,535 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:25,535 INFO [Listener at localhost/44365] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:25,535 INFO [Listener at localhost/44365] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:25,536 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:25,536 INFO [Listener at localhost/44365] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:25,536 INFO [Listener at localhost/44365] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:25,536 INFO [Listener at localhost/44365] http.HttpServer(1146): Jetty bound to port 36565 2023-07-12 10:58:25,536 INFO [Listener at localhost/44365] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:25,538 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,538 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@500bceaf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:25,539 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,539 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6df3a6c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:25,656 INFO [Listener at localhost/44365] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:25,657 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:25,657 INFO [Listener at localhost/44365] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:25,658 INFO [Listener at localhost/44365] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:25,659 INFO [Listener at localhost/44365] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:25,660 INFO [Listener at localhost/44365] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@543f114d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/java.io.tmpdir/jetty-0_0_0_0-36565-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2853594254414203670/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:25,661 INFO [Listener at localhost/44365] server.AbstractConnector(333): Started ServerConnector@6bbc580d{HTTP/1.1, (http/1.1)}{0.0.0.0:36565} 2023-07-12 10:58:25,661 INFO [Listener at localhost/44365] server.Server(415): Started @36560ms 2023-07-12 10:58:25,665 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:25,669 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5102f4f5{HTTP/1.1, (http/1.1)}{0.0.0.0:37375} 2023-07-12 10:58:25,669 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @36568ms 2023-07-12 10:58:25,669 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,670 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:25,671 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,672 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:25,672 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:25,672 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:25,672 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:25,673 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:25,674 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:25,676 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,43287,1689159505012 from backup master directory 2023-07-12 10:58:25,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:25,677 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,677 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:25,677 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:25,677 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,692 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/hbase.id with ID: 9b0b595c-53d7-4d15-8148-b0bb3bb9ca96 2023-07-12 10:58:25,703 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:25,707 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:25,721 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x750a48ff to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:25,725 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74ded25e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:25,725 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:25,725 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:58:25,726 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:25,727 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store-tmp 2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:25,741 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:25,741 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
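
[Illustrative sketch, not part of the captured log.] The 'proc' family attributes printed above for the master's local 'master:store' region (VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536, BLOCKCACHE=true, IN_MEMORY=false) can be expressed with the public descriptor-builder API. This is only an illustration of the descriptor contents; the class name is hypothetical and this is not how the master actually builds its internal store region.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      public static TableDescriptor build() {
        // Mirrors the 'proc' family attributes printed in the log entry above.
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("master:store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.ROW)
                .setBlocksize(64 * 1024)
                .setBlockCacheEnabled(true)
                .setInMemory(false)
                .build())
            .build();
      }
    }
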
2023-07-12 10:58:25,741 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:25,742 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/WALs/jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,745 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43287%2C1689159505012, suffix=, logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/WALs/jenkins-hbase9.apache.org,43287,1689159505012, archiveDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/oldWALs, maxLogs=10 2023-07-12 10:58:25,760 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK] 2023-07-12 10:58:25,764 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK] 2023-07-12 10:58:25,764 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK] 2023-07-12 10:58:25,768 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/WALs/jenkins-hbase9.apache.org,43287,1689159505012/jenkins-hbase9.apache.org%2C43287%2C1689159505012.1689159505745 2023-07-12 10:58:25,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK], DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK]] 2023-07-12 10:58:25,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:25,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:25,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,771 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,772 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:58:25,773 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:58:25,774 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:25,774 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,775 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,778 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:25,780 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:25,780 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10840532320, jitterRate=0.009603247046470642}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:25,780 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:25,781 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:58:25,782 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:58:25,782 INFO 
[master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:58:25,782 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 10:58:25,782 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 10:58:25,783 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 10:58:25,783 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:58:25,786 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 10:58:25,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 10:58:25,787 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 10:58:25,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:58:25,788 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:58:25,790 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:25,790 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:58:25,790 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:58:25,791 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:58:25,792 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:25,792 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:25,792 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 10:58:25,793 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:25,793 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,43287,1689159505012, sessionid=0x101592157fd0000, setting cluster-up flag (Was=false) 2023-07-12 10:58:25,793 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:25,803 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:58:25,803 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,806 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:25,812 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:58:25,813 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:25,814 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.hbase-snapshot/.tmp 2023-07-12 10:58:25,815 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:58:25,815 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:58:25,816 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:58:25,816 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:25,816 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 10:58:25,817 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
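
[Illustrative sketch, not part of the captured log.] The master coprocessors reported above (RSGroupAdminEndpoint and the test's CPMasterObserver, loaded with sequential priorities starting at 536870911) are typically wired in through the master coprocessor configuration key; how this particular test sets them up is an assumption, and the class name below is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

    public class MasterCoprocessorConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // System coprocessors listed here are loaded in order, which is consistent
        // with the sequential priorities shown in the log entries above.
        conf.setStrings(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint",
            "org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver");
        return conf;
      }
    }
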
2023-07-12 10:58:25,818 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:25,835 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:25,835 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:25,835 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:25,835 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:25,835 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:25,835 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:25,836 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:25,837 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:25,837 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:58:25,837 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,837 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:25,837 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159535839 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:58:25,839 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:58:25,840 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,840 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:25,840 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 10:58:25,841 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:58:25,841 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:58:25,841 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:58:25,841 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:58:25,842 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:58:25,842 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:25,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159505842,5,FailOnTimeoutGroup] 2023-07-12 10:58:25,848 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159505848,5,FailOnTimeoutGroup] 2023-07-12 10:58:25,848 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,848 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 10:58:25,848 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,848 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,859 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:25,860 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:25,860 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06 2023-07-12 10:58:25,867 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(951): ClusterId : 9b0b595c-53d7-4d15-8148-b0bb3bb9ca96 2023-07-12 10:58:25,867 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(951): ClusterId : 9b0b595c-53d7-4d15-8148-b0bb3bb9ca96 2023-07-12 10:58:25,870 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:25,870 DEBUG [RS:1;jenkins-hbase9:45603] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:25,867 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(951): ClusterId : 9b0b595c-53d7-4d15-8148-b0bb3bb9ca96 2023-07-12 10:58:25,870 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:25,872 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:25,872 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:25,873 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:25,873 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:25,873 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:25,873 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:25,876 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:25,876 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:25,876 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:25,881 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ReadOnlyZKClient(139): Connect 0x50711acc to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:25,881 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ReadOnlyZKClient(139): Connect 0x76a922b0 to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:25,881 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ReadOnlyZKClient(139): Connect 0x221c5389 to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:25,893 DEBUG [RS:2;jenkins-hbase9:37051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5eb6eeb2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:25,893 DEBUG [RS:0;jenkins-hbase9:40093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@786ed215, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:25,893 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:25,893 DEBUG [RS:1;jenkins-hbase9:45603] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28e438f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 
10:58:25,893 DEBUG [RS:2;jenkins-hbase9:37051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f8be489, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:25,893 DEBUG [RS:0;jenkins-hbase9:40093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3953d82e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:25,894 DEBUG [RS:1;jenkins-hbase9:45603] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6de6a381, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:25,900 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:25,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/info 2023-07-12 10:58:25,903 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:25,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:25,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:25,905 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:25,905 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:25,905 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:25,906 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:25,907 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/table 2023-07-12 10:58:25,907 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:25,908 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:45603 2023-07-12 10:58:25,908 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:40093 2023-07-12 10:58:25,908 INFO [RS:1;jenkins-hbase9:45603] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:25,908 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:25,908 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:37051 2023-07-12 10:58:25,908 INFO [RS:1;jenkins-hbase9:45603] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:25,908 INFO [RS:0;jenkins-hbase9:40093] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:25,908 INFO [RS:0;jenkins-hbase9:40093] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:25,908 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 10:58:25,908 INFO [RS:2;jenkins-hbase9:37051] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:25,908 INFO [RS:2;jenkins-hbase9:37051] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:25,908 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:25,908 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:25,909 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740 2023-07-12 10:58:25,909 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43287,1689159505012 with isa=jenkins-hbase9.apache.org/172.31.2.10:45603, startcode=1689159505357 2023-07-12 10:58:25,909 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43287,1689159505012 with isa=jenkins-hbase9.apache.org/172.31.2.10:40093, startcode=1689159505202 2023-07-12 10:58:25,909 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740 2023-07-12 10:58:25,909 DEBUG [RS:0;jenkins-hbase9:40093] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:25,909 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,43287,1689159505012 with isa=jenkins-hbase9.apache.org/172.31.2.10:37051, startcode=1689159505517 2023-07-12 10:58:25,909 DEBUG [RS:1;jenkins-hbase9:45603] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:25,910 DEBUG [RS:2;jenkins-hbase9:37051] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:25,912 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:25,913 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:25,916 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:60237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:25,916 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55157, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:25,917 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58255, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:25,922 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43287] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,922 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:25,923 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:25,923 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43287] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,923 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06 2023-07-12 10:58:25,924 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43287] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,923 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:25,924 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11409330880, jitterRate=0.06257674098014832}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:25,924 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44937 2023-07-12 10:58:25,924 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39113 2023-07-12 10:58:25,924 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:25,924 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06 2023-07-12 10:58:25,924 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:25,924 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:25,924 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44937 2023-07-12 10:58:25,924 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:25,924 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39113 2023-07-12 10:58:25,924 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:25,925 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:25,925 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:25,925 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:25,925 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06 2023-07-12 10:58:25,925 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44937 2023-07-12 10:58:25,925 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39113 2023-07-12 10:58:25,926 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:25,926 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:25,927 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:25,927 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 10:58:25,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:58:25,927 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:25,929 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:58:25,930 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
2023-07-12 10:58:25,931 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ZKUtil(162): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,931 WARN [RS:0;jenkins-hbase9:40093] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:25,931 INFO [RS:0;jenkins-hbase9:40093] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:25,931 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ZKUtil(162): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,931 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,931 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ZKUtil(162): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,931 WARN [RS:1;jenkins-hbase9:45603] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:25,931 WARN [RS:2;jenkins-hbase9:37051] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:25,931 INFO [RS:1;jenkins-hbase9:45603] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:25,931 INFO [RS:2;jenkins-hbase9:37051] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:25,931 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,931 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,932 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,40093,1689159505202] 2023-07-12 10:58:25,932 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,37051,1689159505517] 2023-07-12 10:58:25,932 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,45603,1689159505357] 2023-07-12 10:58:25,939 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ZKUtil(162): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,939 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ZKUtil(162): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 
2023-07-12 10:58:25,939 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ZKUtil(162): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,939 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ZKUtil(162): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,939 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ZKUtil(162): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,940 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ZKUtil(162): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,940 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ZKUtil(162): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,940 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ZKUtil(162): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,940 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ZKUtil(162): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,941 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:25,941 INFO [RS:0;jenkins-hbase9:40093] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:25,941 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:25,941 DEBUG [RS:1;jenkins-hbase9:45603] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:25,943 INFO [RS:1;jenkins-hbase9:45603] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:25,943 INFO [RS:2;jenkins-hbase9:37051] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:25,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:25,943 INFO [RS:0;jenkins-hbase9:40093] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:25,943 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:25,945 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:25,945 INFO [RS:1;jenkins-hbase9:45603] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:25,950 INFO [RS:2;jenkins-hbase9:37051] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:25,953 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,954 INFO [RS:1;jenkins-hbase9:45603] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:25,954 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,954 INFO [RS:2;jenkins-hbase9:37051] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:25,954 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,954 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,954 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:25,954 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,955 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:0;jenkins-hbase9:40093] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,956 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,956 DEBUG [RS:2;jenkins-hbase9:37051] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,961 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,961 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,962 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,962 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,962 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:25,962 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,962 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,962 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,963 DEBUG [RS:1;jenkins-hbase9:45603] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:25,968 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,969 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,969 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,969 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,975 INFO [RS:2;jenkins-hbase9:37051] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:25,975 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37051,1689159505517-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:25,976 INFO [RS:0;jenkins-hbase9:40093] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:25,976 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,40093,1689159505202-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,979 INFO [RS:1;jenkins-hbase9:45603] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:25,980 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,45603,1689159505357-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,986 INFO [RS:2;jenkins-hbase9:37051] regionserver.Replication(203): jenkins-hbase9.apache.org,37051,1689159505517 started 2023-07-12 10:58:25,986 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,37051,1689159505517, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:37051, sessionid=0x101592157fd0003 2023-07-12 10:58:25,986 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:25,986 DEBUG [RS:2;jenkins-hbase9:37051] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,986 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,37051,1689159505517' 2023-07-12 10:58:25,986 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:25,986 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,37051,1689159505517' 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:25,987 DEBUG [RS:2;jenkins-hbase9:37051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:25,988 DEBUG [RS:2;jenkins-hbase9:37051] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:25,988 INFO [RS:2;jenkins-hbase9:37051] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:25,988 INFO [RS:0;jenkins-hbase9:40093] regionserver.Replication(203): jenkins-hbase9.apache.org,40093,1689159505202 started 2023-07-12 10:58:25,988 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,40093,1689159505202, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:40093, sessionid=0x101592157fd0001 2023-07-12 10:58:25,988 DEBUG 
[RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:25,988 DEBUG [RS:0;jenkins-hbase9:40093] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,988 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40093,1689159505202' 2023-07-12 10:58:25,988 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40093,1689159505202' 2023-07-12 10:58:25,989 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:25,990 DEBUG [RS:0;jenkins-hbase9:40093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:25,990 DEBUG [RS:0;jenkins-hbase9:40093] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:25,990 INFO [RS:0;jenkins-hbase9:40093] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:25,990 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,990 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,991 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ZKUtil(398): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:25,991 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ZKUtil(398): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:25,991 INFO [RS:2;jenkins-hbase9:37051] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:25,991 INFO [RS:0;jenkins-hbase9:40093] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:25,991 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:25,991 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,992 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,992 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,993 INFO [RS:1;jenkins-hbase9:45603] regionserver.Replication(203): jenkins-hbase9.apache.org,45603,1689159505357 started 2023-07-12 10:58:25,993 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,45603,1689159505357, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:45603, sessionid=0x101592157fd0002 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45603,1689159505357' 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45603,1689159505357' 2023-07-12 10:58:25,994 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:25,995 DEBUG [RS:1;jenkins-hbase9:45603] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:25,995 DEBUG [RS:1;jenkins-hbase9:45603] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:25,995 INFO [RS:1;jenkins-hbase9:45603] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 10:58:25,995 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:25,995 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ZKUtil(398): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 10:58:25,995 INFO [RS:1;jenkins-hbase9:45603] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 10:58:25,995 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:25,995 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:26,081 DEBUG [jenkins-hbase9:43287] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:26,083 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40093,1689159505202, state=OPENING 2023-07-12 10:58:26,084 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 10:58:26,087 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:26,087 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:26,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40093,1689159505202}] 2023-07-12 10:58:26,095 INFO [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C40093%2C1689159505202, suffix=, logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,40093,1689159505202, archiveDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs, maxLogs=32 2023-07-12 10:58:26,095 INFO [RS:2;jenkins-hbase9:37051] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C37051%2C1689159505517, suffix=, logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,37051,1689159505517, archiveDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs, maxLogs=32 2023-07-12 10:58:26,097 INFO [RS:1;jenkins-hbase9:45603] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C45603%2C1689159505357, suffix=, logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,45603,1689159505357, archiveDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs, maxLogs=32 2023-07-12 10:58:26,118 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK] 2023-07-12 10:58:26,119 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK] 2023-07-12 10:58:26,120 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK] 2023-07-12 10:58:26,120 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK] 2023-07-12 10:58:26,121 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK] 2023-07-12 10:58:26,121 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK] 2023-07-12 10:58:26,128 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK] 2023-07-12 10:58:26,128 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK] 2023-07-12 10:58:26,128 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK] 2023-07-12 10:58:26,129 WARN [ReadOnlyZKClient-127.0.0.1:50934@0x750a48ff] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:58:26,129 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:26,131 INFO [RS:1;jenkins-hbase9:45603] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,45603,1689159505357/jenkins-hbase9.apache.org%2C45603%2C1689159505357.1689159506098 2023-07-12 10:58:26,133 INFO [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,40093,1689159505202/jenkins-hbase9.apache.org%2C40093%2C1689159505202.1689159506097 2023-07-12 10:58:26,139 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58918, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:26,139 DEBUG [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK], DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK], DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK]] 2023-07-12 10:58:26,139 DEBUG [RS:1;jenkins-hbase9:45603] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK], DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK], DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK]] 2023-07-12 10:58:26,139 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40093] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:58918 deadline: 1689159566139, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:26,139 INFO [RS:2;jenkins-hbase9:37051] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,37051,1689159505517/jenkins-hbase9.apache.org%2C37051%2C1689159505517.1689159506098 2023-07-12 10:58:26,140 DEBUG [RS:2;jenkins-hbase9:37051] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK], DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK]] 2023-07-12 10:58:26,241 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:26,243 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:26,246 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58928, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:26,251 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:26,251 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:26,252 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C40093%2C1689159505202.meta, suffix=.meta, 
logDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,40093,1689159505202, archiveDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs, maxLogs=32 2023-07-12 10:58:26,267 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK] 2023-07-12 10:58:26,268 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK] 2023-07-12 10:58:26,269 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK] 2023-07-12 10:58:26,272 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/WALs/jenkins-hbase9.apache.org,40093,1689159505202/jenkins-hbase9.apache.org%2C40093%2C1689159505202.meta.1689159506253.meta 2023-07-12 10:58:26,273 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39071,DS-5fba7ea9-c704-49e2-8cb4-8675f24d6618,DISK], DatanodeInfoWithStorage[127.0.0.1:41919,DS-bc6ca73f-d318-4496-9017-485aa892364e,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-b4e92f81-f9f9-454c-8bb3-e70c7d1d5f97,DISK]] 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:26,274 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:26,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:26,275 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:26,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/info 2023-07-12 10:58:26,277 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/info 2023-07-12 10:58:26,277 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:26,277 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:26,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:26,278 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:26,278 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:26,279 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:26,279 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:26,279 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:26,280 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/table 2023-07-12 10:58:26,280 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/table 2023-07-12 10:58:26,280 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:26,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:26,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740 2023-07-12 10:58:26,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740 2023-07-12 10:58:26,284 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:26,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:26,286 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10913637280, jitterRate=0.0164116770029068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:26,286 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:26,287 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689159506241 2023-07-12 10:58:26,291 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:26,291 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:26,292 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40093,1689159505202, state=OPEN 2023-07-12 10:58:26,293 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:26,293 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:26,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 10:58:26,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40093,1689159505202 in 206 msec 2023-07-12 10:58:26,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 10:58:26,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 367 msec 2023-07-12 10:58:26,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 479 msec 2023-07-12 10:58:26,298 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159506298, completionTime=-1 2023-07-12 10:58:26,298 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 10:58:26,298 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 10:58:26,301 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:58:26,301 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159566301 2023-07-12 10:58:26,301 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159626301 2023-07-12 10:58:26,301 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-12 10:58:26,306 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43287,1689159505012-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,306 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43287,1689159505012-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,307 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43287,1689159505012-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,307 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:43287, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,307 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,307 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 10:58:26,307 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:26,307 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 10:58:26,308 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 10:58:26,309 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:26,309 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:26,311 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,311 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a empty. 2023-07-12 10:58:26,311 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,311 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 10:58:26,323 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:26,325 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 82d53331a6175c62c9b54e724df82f3a, NAME => 'hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp 2023-07-12 10:58:26,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 82d53331a6175c62c9b54e724df82f3a, disabling compactions & flushes 2023-07-12 10:58:26,336 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 
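The create 'hbase:namespace' line above spells out the 'info' column-family attributes (BLOOMFILTER ROW, IN_MEMORY true, VERSIONS 10, BLOCKSIZE 8192, REPLICATION_SCOPE 0, and so on). As a sketch only, the same attributes expressed with the 2.4 descriptor builders; "example:namespace_like" is a placeholder table, since hbase:namespace itself is created internally by the master as shown in the log.

import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeDescriptor {
  public static TableDescriptor build() {
    // Mirrors the {NAME => 'info', ...} attributes in the create 'hbase:namespace' record.
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)
        .setInMemory(true)
        .setMaxVersions(10)
        .setKeepDeletedCells(KeepDeletedCells.FALSE)
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .setCompressionType(Compression.Algorithm.NONE)
        .setBlocksize(8192)
        .setScope(0) // REPLICATION_SCOPE => '0'
        .build();
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "namespace_like"))
        .setColumnFamily(info)
        .build();
  }
}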
2023-07-12 10:58:26,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:26,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. after waiting 0 ms 2023-07-12 10:58:26,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:26,337 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:26,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 82d53331a6175c62c9b54e724df82f3a: 2023-07-12 10:58:26,339 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:26,340 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159506340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159506340"}]},"ts":"1689159506340"} 2023-07-12 10:58:26,342 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:26,343 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:26,343 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506343"}]},"ts":"1689159506343"} 2023-07-12 10:58:26,344 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 10:58:26,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:26,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:26,352 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:26,352 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:26,352 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:26,352 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=82d53331a6175c62c9b54e724df82f3a, ASSIGN}] 2023-07-12 10:58:26,355 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=82d53331a6175c62c9b54e724df82f3a, ASSIGN 2023-07-12 10:58:26,356 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=82d53331a6175c62c9b54e724df82f3a, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,40093,1689159505202; forceNewPlan=false, retain=false 2023-07-12 10:58:26,442 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:26,444 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:58:26,445 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:26,446 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:26,447 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,448 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4 empty. 
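The create 'hbase:rsgroup' record above attaches the MultiRowMutationEndpoint coprocessor and pins the SPLIT_POLICY metadata to DisabledRegionSplitPolicy. A sketch of how the same two table attributes are declared through the descriptor builder; "example:rsgroup_like" is a placeholder, because hbase:rsgroup is created by the RSGroupInfoManagerImpl startup worker, not by client code.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RSGroupLikeDescriptor {
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "rsgroup_like"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)
            .build())
        // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}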
2023-07-12 10:58:26,448 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,448 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 10:58:26,460 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:26,461 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d11fc6d8febafd447b6d1ce1b23580f4, NAME => 'hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp 2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d11fc6d8febafd447b6d1ce1b23580f4, disabling compactions & flushes 2023-07-12 10:58:26,471 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. after waiting 0 ms 2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,471 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 
2023-07-12 10:58:26,471 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d11fc6d8febafd447b6d1ce1b23580f4: 2023-07-12 10:58:26,473 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:26,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159506474"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159506474"}]},"ts":"1689159506474"} 2023-07-12 10:58:26,475 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:26,476 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:26,476 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506476"}]},"ts":"1689159506476"} 2023-07-12 10:58:26,477 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 10:58:26,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:26,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:26,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:26,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:26,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:26,480 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d11fc6d8febafd447b6d1ce1b23580f4, ASSIGN}] 2023-07-12 10:58:26,481 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d11fc6d8febafd447b6d1ce1b23580f4, ASSIGN 2023-07-12 10:58:26,481 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d11fc6d8febafd447b6d1ce1b23580f4, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37051,1689159505517; forceNewPlan=false, retain=false 2023-07-12 10:58:26,482 INFO [jenkins-hbase9:43287] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 10:58:26,483 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=82d53331a6175c62c9b54e724df82f3a, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:26,484 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159506483"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159506483"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159506483"}]},"ts":"1689159506483"} 2023-07-12 10:58:26,484 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d11fc6d8febafd447b6d1ce1b23580f4, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:26,484 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159506484"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159506484"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159506484"}]},"ts":"1689159506484"} 2023-07-12 10:58:26,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 82d53331a6175c62c9b54e724df82f3a, server=jenkins-hbase9.apache.org,40093,1689159505202}] 2023-07-12 10:58:26,487 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure d11fc6d8febafd447b6d1ce1b23580f4, server=jenkins-hbase9.apache.org,37051,1689159505517}] 2023-07-12 10:58:26,641 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:26,641 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:26,644 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34192, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:26,646 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 
2023-07-12 10:58:26,646 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 82d53331a6175c62c9b54e724df82f3a, NAME => 'hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:26,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,650 INFO [StoreOpener-82d53331a6175c62c9b54e724df82f3a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,651 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d11fc6d8febafd447b6d1ce1b23580f4, NAME => 'hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:26,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:26,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. service=MultiRowMutationService 2023-07-12 10:58:26,652 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
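The HStore(381) "Created cacheConfig" lines above list the per-family block-cache flags in effect (cacheDataOnRead=true, cacheDataOnWrite=false, prefetchOnOpen=false, etc.). As a rough sketch, and assuming the default global block-cache settings, these flags map onto the following column-family builder setters; the mapping of cacheDataOnRead to the BLOCKCACHE attribute is my reading of the log, not something the log states directly.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CacheFlags {
  public static ColumnFamilyDescriptor build() {
    // Matches the flags printed by HStore(381) for the default cacheConfig above.
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBlockCacheEnabled(true)      // corresponds to cacheDataOnRead=true
        .setCacheDataOnWrite(false)
        .setCacheIndexesOnWrite(false)
        .setCacheBloomsOnWrite(false)
        .setEvictBlocksOnClose(false)
        .setPrefetchBlocksOnOpen(false)
        .build();
  }
}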
2023-07-12 10:58:26,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,652 DEBUG [StoreOpener-82d53331a6175c62c9b54e724df82f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/info 2023-07-12 10:58:26,652 DEBUG [StoreOpener-82d53331a6175c62c9b54e724df82f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/info 2023-07-12 10:58:26,653 INFO [StoreOpener-82d53331a6175c62c9b54e724df82f3a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 82d53331a6175c62c9b54e724df82f3a columnFamilyName info 2023-07-12 10:58:26,653 INFO [StoreOpener-82d53331a6175c62c9b54e724df82f3a-1] regionserver.HStore(310): Store=82d53331a6175c62c9b54e724df82f3a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:26,654 INFO [StoreOpener-d11fc6d8febafd447b6d1ce1b23580f4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,654 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,655 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,655 DEBUG [StoreOpener-d11fc6d8febafd447b6d1ce1b23580f4-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/m 2023-07-12 10:58:26,656 DEBUG [StoreOpener-d11fc6d8febafd447b6d1ce1b23580f4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/m 2023-07-12 10:58:26,656 INFO [StoreOpener-d11fc6d8febafd447b6d1ce1b23580f4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d11fc6d8febafd447b6d1ce1b23580f4 columnFamilyName m 2023-07-12 10:58:26,657 INFO [StoreOpener-d11fc6d8febafd447b6d1ce1b23580f4-1] regionserver.HStore(310): Store=d11fc6d8febafd447b6d1ce1b23580f4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:26,658 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,658 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,661 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:26,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:26,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:26,665 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened d11fc6d8febafd447b6d1ce1b23580f4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7b9d3a2f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:26,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for d11fc6d8febafd447b6d1ce1b23580f4: 2023-07-12 10:58:26,666 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4., pid=9, masterSystemTime=1689159506641 2023-07-12 10:58:26,668 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:26,670 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 82d53331a6175c62c9b54e724df82f3a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11166820480, jitterRate=0.03999119997024536}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:26,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 82d53331a6175c62c9b54e724df82f3a: 2023-07-12 10:58:26,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,670 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d11fc6d8febafd447b6d1ce1b23580f4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:26,671 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159506670"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159506670"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159506670"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159506670"}]},"ts":"1689159506670"} 2023-07-12 10:58:26,672 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:26,673 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a., pid=8, masterSystemTime=1689159506638 2023-07-12 10:58:26,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:26,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 
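The "Opened 82d53331..." record above prints the split policy chain (SteppingSplitPolicy over IncreasingToUpperBoundRegionSplitPolicy over ConstantSizeRegionSplitPolicy) with a desiredMaxFileSize of roughly 10 GB perturbed by the printed jitterRate. A sketch, under the assumption that desiredMaxFileSize derives from hbase.hregion.max.filesize, of pinning both the policy and the max file size per table; "example_split" is a placeholder table name.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicyExample {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_split"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .setMaxFileSize(10L * 1024 * 1024 * 1024)      // base for desiredMaxFileSize before jitter
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        .build();
  }
}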
2023-07-12 10:58:26,675 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=82d53331a6175c62c9b54e724df82f3a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:26,675 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159506675"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159506675"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159506675"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159506675"}]},"ts":"1689159506675"} 2023-07-12 10:58:26,676 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 10:58:26,676 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure d11fc6d8febafd447b6d1ce1b23580f4, server=jenkins-hbase9.apache.org,37051,1689159505517 in 187 msec 2023-07-12 10:58:26,677 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 10:58:26,678 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d11fc6d8febafd447b6d1ce1b23580f4, ASSIGN in 196 msec 2023-07-12 10:58:26,678 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:26,678 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506678"}]},"ts":"1689159506678"} 2023-07-12 10:58:26,679 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 10:58:26,679 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 82d53331a6175c62c9b54e724df82f3a, server=jenkins-hbase9.apache.org,40093,1689159505202 in 192 msec 2023-07-12 10:58:26,680 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 10:58:26,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 10:58:26,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=82d53331a6175c62c9b54e724df82f3a, ASSIGN in 327 msec 2023-07-12 10:58:26,690 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:26,690 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506690"}]},"ts":"1689159506690"} 2023-07-12 10:58:26,690 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:26,694 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 248 msec 2023-07-12 10:58:26,698 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 10:58:26,700 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:26,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 393 msec 2023-07-12 10:58:26,708 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 10:58:26,710 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:26,710 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:26,717 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:26,725 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:26,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-07-12 10:58:26,739 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:26,747 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:26,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:26,750 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34206, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:26,754 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:26,754 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
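Once the RSGroup startup worker reports "RSGroup table=hbase:rsgroup is online" and refreshes its cache, group metadata is served from that table plus the /hbase/rsgroup znodes written just below. A sketch of reading the default group from a client, assuming the RSGroupAdminClient class from this hbase-rsgroup module is on the classpath with a constructor taking a Connection; this is illustrative only and not part of the test flow.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The /hbase/rsgroup/default znode updated above backs this "default" group.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + defaultGroup.getServers());
    }
  }
}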
2023-07-12 10:58:26,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-07-12 10:58:26,758 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:26,758 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:26,761 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:26,763 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,43287,1689159505012] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:26,763 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:26,767 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:26,767 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.090sec 2023-07-12 10:58:26,769 DEBUG [Listener at localhost/44365] zookeeper.ReadOnlyZKClient(139): Connect 0x02953a5e to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:26,769 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-12 10:58:26,770 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:26,774 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 10:58:26,774 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 10:58:26,775 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:26,776 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:26,777 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 10:58:26,778 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:26,779 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68 empty. 2023-07-12 10:58:26,779 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:26,779 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 10:58:26,782 DEBUG [Listener at localhost/44365] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28bc48f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:26,786 DEBUG [hconnection-0x2959891b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:26,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 10:58:26,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 10:58:26,789 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 
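The master is creating hbase:quota with its 'q' and 'u' families and enabling the QuotaObserverChore above. A sketch of the kind of client call whose settings end up stored in that table, assuming quota support is enabled; "example_throttled" is a placeholder table and the limit is arbitrary.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Settings written by this call are persisted in the hbase:quota table created above.
      admin.setQuota(QuotaSettingsFactory.throttleTable(
          TableName.valueOf("example_throttled"), ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
    }
  }
}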
2023-07-12 10:58:26,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:26,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 10:58:26,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:26,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43287,1689159505012-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:26,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43287,1689159505012-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 10:58:26,791 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58944, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:26,792 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:26,793 INFO [Listener at localhost/44365] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:26,799 DEBUG [Listener at localhost/44365] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:26,805 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:26,805 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:53480, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:26,809 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:26,809 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:26,810 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:26,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:26,811 DEBUG [Listener at localhost/44365] zookeeper.ReadOnlyZKClient(139): Connect 0x649dfbc5 to 127.0.0.1:50934 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:26,812 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => e918aa15b9e12c48ddb2735bef80dd68, NAME => 'hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp 2023-07-12 10:58:26,817 DEBUG [Listener at localhost/44365] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@169a71c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:26,817 INFO [Listener at localhost/44365] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50934 2023-07-12 10:58:26,824 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:26,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 10:58:26,831 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101592157fd000a connected 2023-07-12 10:58:26,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing e918aa15b9e12c48ddb2735bef80dd68, disabling compactions & flushes 2023-07-12 10:58:26,835 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. after waiting 0 ms 2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:26,835 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 
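A few records above, MasterRpcServices logs "Client=jenkins//172.31.2.10 set balanceSwitch=false": the test turns the balancer off before driving assignments itself. A minimal sketch of the client call that produces that log line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableBalancer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Produces the "set balanceSwitch=false" master log line; returns the previous state.
      boolean previouslyEnabled = admin.balancerSwitch(false, true);
      System.out.println("balancer was enabled before: " + previouslyEnabled);
    }
  }
}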
2023-07-12 10:58:26,835 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for e918aa15b9e12c48ddb2735bef80dd68: 2023-07-12 10:58:26,838 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:26,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 10:58:26,841 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159506841"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159506841"}]},"ts":"1689159506841"} 2023-07-12 10:58:26,842 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:26,844 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:26,847 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:26,847 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506847"}]},"ts":"1689159506847"} 2023-07-12 10:58:26,848 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 10:58:26,849 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 18 msec 2023-07-12 10:58:26,852 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:26,852 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:26,852 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:26,852 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:26,852 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:26,852 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e918aa15b9e12c48ddb2735bef80dd68, ASSIGN}] 2023-07-12 10:58:26,853 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e918aa15b9e12c48ddb2735bef80dd68, ASSIGN 2023-07-12 10:58:26,854 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=e918aa15b9e12c48ddb2735bef80dd68, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,40093,1689159505202; forceNewPlan=false, retain=false 
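The HMaster$15(3014) record above shows the test creating namespace 'np1' with region and table quotas (maxregions 5, maxtables 2), and pid=13 finishing. A sketch of the client-side equivalent of that request.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNp1 {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Equivalent of: creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5',
      //                          hbase.namespace.quota.maxtables => '2'}
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
    }
  }
}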
2023-07-12 10:58:26,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 10:58:26,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:26,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 10:58:26,948 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:26,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-12 10:58:26,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:26,950 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:26,950 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:26,952 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:26,954 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:26,954 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf empty. 
2023-07-12 10:58:26,955 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:26,956 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 10:58:26,974 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:26,975 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0706bf454afd24a89ba5da9b7aadccbf, NAME => 'np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp 2023-07-12 10:58:26,984 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:26,984 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 0706bf454afd24a89ba5da9b7aadccbf, disabling compactions & flushes 2023-07-12 10:58:26,984 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:26,984 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:26,985 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. after waiting 0 ms 2023-07-12 10:58:26,985 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:26,985 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:26,985 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 0706bf454afd24a89ba5da9b7aadccbf: 2023-07-12 10:58:26,987 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:26,988 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159506988"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159506988"}]},"ts":"1689159506988"} 2023-07-12 10:58:26,989 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 10:58:26,990 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:26,990 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159506990"}]},"ts":"1689159506990"} 2023-07-12 10:58:26,991 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 10:58:26,997 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:26,997 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:26,997 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:26,997 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:26,997 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:26,997 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, ASSIGN}] 2023-07-12 10:58:26,998 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, ASSIGN 2023-07-12 10:58:26,999 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,37051,1689159505517; forceNewPlan=false, retain=false 2023-07-12 10:58:27,004 INFO [jenkins-hbase9:43287] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 10:58:27,006 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=e918aa15b9e12c48ddb2735bef80dd68, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:27,007 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159507006"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159507006"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159507006"}]},"ts":"1689159507006"} 2023-07-12 10:58:27,006 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=0706bf454afd24a89ba5da9b7aadccbf, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:27,007 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159507006"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159507006"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159507006"}]},"ts":"1689159507006"} 2023-07-12 10:58:27,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; OpenRegionProcedure e918aa15b9e12c48ddb2735bef80dd68, server=jenkins-hbase9.apache.org,40093,1689159505202}] 2023-07-12 10:58:27,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 0706bf454afd24a89ba5da9b7aadccbf, server=jenkins-hbase9.apache.org,37051,1689159505517}] 2023-07-12 10:58:27,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:27,165 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 
2023-07-12 10:58:27,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e918aa15b9e12c48ddb2735bef80dd68, NAME => 'hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:27,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:27,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,167 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0706bf454afd24a89ba5da9b7aadccbf, NAME => 'np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:27,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:27,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,169 DEBUG [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/q 2023-07-12 10:58:27,169 DEBUG [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/q 2023-07-12 10:58:27,170 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e918aa15b9e12c48ddb2735bef80dd68 columnFamilyName q 2023-07-12 10:58:27,171 INFO [StoreOpener-0706bf454afd24a89ba5da9b7aadccbf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,171 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] regionserver.HStore(310): Store=e918aa15b9e12c48ddb2735bef80dd68/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,171 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,173 DEBUG [StoreOpener-0706bf454afd24a89ba5da9b7aadccbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/fam1 2023-07-12 10:58:27,173 DEBUG [StoreOpener-0706bf454afd24a89ba5da9b7aadccbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/fam1 2023-07-12 10:58:27,173 INFO [StoreOpener-0706bf454afd24a89ba5da9b7aadccbf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0706bf454afd24a89ba5da9b7aadccbf columnFamilyName fam1 2023-07-12 10:58:27,174 INFO [StoreOpener-0706bf454afd24a89ba5da9b7aadccbf-1] regionserver.HStore(310): Store=0706bf454afd24a89ba5da9b7aadccbf/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,176 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,176 DEBUG [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/u 2023-07-12 10:58:27,176 DEBUG [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/u 2023-07-12 10:58:27,177 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e918aa15b9e12c48ddb2735bef80dd68 columnFamilyName u 2023-07-12 10:58:27,178 INFO [StoreOpener-e918aa15b9e12c48ddb2735bef80dd68-1] regionserver.HStore(310): Store=e918aa15b9e12c48ddb2735bef80dd68/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:27,179 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,179 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-12 10:58:27,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:27,184 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0706bf454afd24a89ba5da9b7aadccbf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10290931680, jitterRate=-0.04158230125904083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:27,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0706bf454afd24a89ba5da9b7aadccbf: 2023-07-12 10:58:27,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf., pid=18, masterSystemTime=1689159507164 2023-07-12 10:58:27,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,187 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=0706bf454afd24a89ba5da9b7aadccbf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:27,187 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159507187"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159507187"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159507187"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159507187"}]},"ts":"1689159507187"} 2023-07-12 10:58:27,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:27,188 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e918aa15b9e12c48ddb2735bef80dd68; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9885376480, jitterRate=-0.07935257256031036}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 10:58:27,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e918aa15b9e12c48ddb2735bef80dd68: 2023-07-12 10:58:27,189 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68., pid=17, masterSystemTime=1689159507161 2023-07-12 
10:58:27,191 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-12 10:58:27,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:27,191 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 0706bf454afd24a89ba5da9b7aadccbf, server=jenkins-hbase9.apache.org,37051,1689159505517 in 179 msec 2023-07-12 10:58:27,191 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:27,192 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=e918aa15b9e12c48ddb2735bef80dd68, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:27,192 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689159507191"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159507191"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159507191"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159507191"}]},"ts":"1689159507191"} 2023-07-12 10:58:27,193 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-12 10:58:27,193 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, ASSIGN in 194 msec 2023-07-12 10:58:27,194 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:27,194 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159507194"}]},"ts":"1689159507194"} 2023-07-12 10:58:27,195 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 10:58:27,195 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-12 10:58:27,195 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; OpenRegionProcedure e918aa15b9e12c48ddb2735bef80dd68, server=jenkins-hbase9.apache.org,40093,1689159505202 in 184 msec 2023-07-12 10:58:27,197 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 10:58:27,197 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=e918aa15b9e12c48ddb2735bef80dd68, ASSIGN in 343 msec 2023-07-12 10:58:27,198 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:27,198 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:27,198 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159507198"}]},"ts":"1689159507198"} 2023-07-12 10:58:27,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 253 msec 2023-07-12 10:58:27,199 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 10:58:27,201 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:27,202 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 431 msec 2023-07-12 10:58:27,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 10:58:27,251 INFO [Listener at localhost/44365] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-12 10:58:27,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:27,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 10:58:27,258 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:27,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 10:58:27,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 10:58:27,283 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=29 msec 2023-07-12 10:58:27,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 10:58:27,364 INFO [Listener at localhost/44365] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. 
The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-12 10:58:27,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:27,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:27,367 INFO [Listener at localhost/44365] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 10:58:27,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable np1:table1 2023-07-12 10:58:27,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 10:58:27,379 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159507378"}]},"ts":"1689159507378"} 2023-07-12 10:58:27,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 10:58:27,380 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 10:58:27,382 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 10:58:27,383 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, UNASSIGN}] 2023-07-12 10:58:27,384 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, UNASSIGN 2023-07-12 10:58:27,384 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0706bf454afd24a89ba5da9b7aadccbf, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:27,385 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159507384"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159507384"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159507384"}]},"ts":"1689159507384"} 2023-07-12 10:58:27,386 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 0706bf454afd24a89ba5da9b7aadccbf, server=jenkins-hbase9.apache.org,37051,1689159505517}] 2023-07-12 10:58:27,461 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 10:58:27,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 10:58:27,570 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0706bf454afd24a89ba5da9b7aadccbf, disabling compactions & flushes 2023-07-12 10:58:27,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. after waiting 0 ms 2023-07-12 10:58:27,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:27,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf. 2023-07-12 10:58:27,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0706bf454afd24a89ba5da9b7aadccbf: 2023-07-12 10:58:27,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,603 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0706bf454afd24a89ba5da9b7aadccbf, regionState=CLOSED 2023-07-12 10:58:27,604 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159507603"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159507603"}]},"ts":"1689159507603"} 2023-07-12 10:58:27,609 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 10:58:27,609 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 0706bf454afd24a89ba5da9b7aadccbf, server=jenkins-hbase9.apache.org,37051,1689159505517 in 219 msec 2023-07-12 10:58:27,611 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 10:58:27,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0706bf454afd24a89ba5da9b7aadccbf, UNASSIGN in 226 msec 2023-07-12 10:58:27,613 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159507613"}]},"ts":"1689159507613"} 2023-07-12 10:58:27,615 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 10:58:27,619 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 10:58:27,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 253 msec 2023-07-12 10:58:27,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 10:58:27,684 INFO [Listener at localhost/44365] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 10:58:27,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete np1:table1 2023-07-12 10:58:27,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,687 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 10:58:27,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:27,690 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:27,693 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:27,695 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/fam1, FileablePath, hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/recovered.edits] 2023-07-12 10:58:27,701 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/recovered.edits/4.seqid to hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/archive/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf/recovered.edits/4.seqid 2023-07-12 10:58:27,702 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/.tmp/data/np1/table1/0706bf454afd24a89ba5da9b7aadccbf 2023-07-12 10:58:27,702 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 10:58:27,704 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,706 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 10:58:27,708 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-12 10:58:27,709 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,709 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 10:58:27,709 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159507709"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:27,711 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:27,711 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0706bf454afd24a89ba5da9b7aadccbf, NAME => 'np1:table1,,1689159506944.0706bf454afd24a89ba5da9b7aadccbf.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:27,711 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-12 10:58:27,711 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159507711"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:27,712 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 10:58:27,714 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 10:58:27,715 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-12 10:58:27,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 10:58:27,796 INFO [Listener at localhost/44365] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 10:58:27,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete np1 2023-07-12 10:58:27,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,811 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,815 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,817 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=24 2023-07-12 10:58:27,819 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 10:58:27,819 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:27,820 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,822 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 10:58:27,829 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 20 msec 2023-07-12 10:58:27,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43287] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 10:58:27,919 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 10:58:27,920 INFO [Listener at localhost/44365] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02953a5e to 127.0.0.1:50934 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] util.JVMClusterUtil(257): Found active master hash=1347300919, stopped=false 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:27,920 DEBUG [Listener at localhost/44365] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 10:58:27,920 INFO [Listener at localhost/44365] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:27,922 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,922 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,922 INFO [Listener at localhost/44365] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:27,922 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase 2023-07-12 10:58:27,922 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,922 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:27,923 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:27,923 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:27,923 DEBUG [Listener at localhost/44365] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x750a48ff to 127.0.0.1:50934 2023-07-12 10:58:27,923 DEBUG [Listener at localhost/44365] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,40093,1689159505202' ***** 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,45603,1689159505357' ***** 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,37051,1689159505517' ***** 2023-07-12 10:58:27,924 INFO [Listener at localhost/44365] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:27,924 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:27,924 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:27,924 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:27,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:27,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:27,934 INFO [RS:2;jenkins-hbase9:37051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@543f114d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:27,934 INFO [RS:0;jenkins-hbase9:40093] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@230e1553{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:27,934 INFO [RS:1;jenkins-hbase9:45603] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a24290f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:27,934 INFO [RS:2;jenkins-hbase9:37051] server.AbstractConnector(383): Stopped ServerConnector@6bbc580d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:27,934 INFO [RS:2;jenkins-hbase9:37051] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:27,934 INFO [RS:0;jenkins-hbase9:40093] server.AbstractConnector(383): Stopped ServerConnector@4c16c29d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:27,934 INFO [RS:1;jenkins-hbase9:45603] server.AbstractConnector(383): Stopped ServerConnector@727af871{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:27,935 INFO [RS:2;jenkins-hbase9:37051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6df3a6c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:27,935 INFO [RS:0;jenkins-hbase9:40093] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:27,937 INFO [RS:2;jenkins-hbase9:37051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@500bceaf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:27,935 INFO [RS:1;jenkins-hbase9:45603] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:27,937 INFO [RS:0;jenkins-hbase9:40093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2dcd6b23{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:27,938 INFO [RS:1;jenkins-hbase9:45603] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e68d44a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:27,938 INFO [RS:0;jenkins-hbase9:40093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5a43795a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:27,938 INFO [RS:1;jenkins-hbase9:45603] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b118b49{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:27,938 INFO [RS:2;jenkins-hbase9:37051] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:27,938 INFO [RS:2;jenkins-hbase9:37051] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 10:58:27,938 INFO [RS:2;jenkins-hbase9:37051] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:27,938 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(3305): Received CLOSE for d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:27,938 INFO [RS:1;jenkins-hbase9:45603] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:27,938 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:27,938 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:27,938 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:27,938 INFO [RS:1;jenkins-hbase9:45603] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:27,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing d11fc6d8febafd447b6d1ce1b23580f4, disabling compactions & flushes 2023-07-12 10:58:27,938 DEBUG [RS:2;jenkins-hbase9:37051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50711acc to 127.0.0.1:50934 2023-07-12 10:58:27,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:27,940 INFO [RS:1;jenkins-hbase9:45603] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:27,940 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:27,940 DEBUG [RS:2;jenkins-hbase9:37051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. after waiting 0 ms 2023-07-12 10:58:27,940 DEBUG [RS:1;jenkins-hbase9:45603] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x221c5389 to 127.0.0.1:50934 2023-07-12 10:58:27,940 INFO [RS:0;jenkins-hbase9:40093] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:27,941 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:27,941 INFO [RS:0;jenkins-hbase9:40093] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:27,940 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:27,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing d11fc6d8febafd447b6d1ce1b23580f4 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-12 10:58:27,941 INFO [RS:0;jenkins-hbase9:40093] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 10:58:27,941 DEBUG [RS:1;jenkins-hbase9:45603] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,941 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(3305): Received CLOSE for 82d53331a6175c62c9b54e724df82f3a 2023-07-12 10:58:27,941 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1478): Online Regions={d11fc6d8febafd447b6d1ce1b23580f4=hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4.} 2023-07-12 10:58:27,942 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,45603,1689159505357; all regions closed. 2023-07-12 10:58:27,942 DEBUG [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1504): Waiting on d11fc6d8febafd447b6d1ce1b23580f4 2023-07-12 10:58:27,942 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(3305): Received CLOSE for e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,942 DEBUG [RS:1;jenkins-hbase9:45603] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 10:58:27,942 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:27,942 DEBUG [RS:0;jenkins-hbase9:40093] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x76a922b0 to 127.0.0.1:50934 2023-07-12 10:58:27,943 DEBUG [RS:0;jenkins-hbase9:40093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 82d53331a6175c62c9b54e724df82f3a, disabling compactions & flushes 2023-07-12 10:58:27,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:27,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:27,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:27,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:27,943 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-12 10:58:27,944 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 82d53331a6175c62c9b54e724df82f3a=hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a., e918aa15b9e12c48ddb2735bef80dd68=hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68.} 2023-07-12 10:58:27,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:27,944 DEBUG [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1504): Waiting on 1588230740, 82d53331a6175c62c9b54e724df82f3a, e918aa15b9e12c48ddb2735bef80dd68 2023-07-12 10:58:27,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:27,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 
after waiting 0 ms 2023-07-12 10:58:27,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:27,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 82d53331a6175c62c9b54e724df82f3a 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 10:58:27,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:27,945 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:27,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:27,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:27,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:27,947 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-12 10:58:27,951 DEBUG [RS:1;jenkins-hbase9:45603] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C45603%2C1689159505357:(num 1689159506098) 2023-07-12 10:58:27,952 DEBUG [RS:1;jenkins-hbase9:45603] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:27,952 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:27,952 INFO [RS:1;jenkins-hbase9:45603] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
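The entries above show each region's memstore being flushed as part of the HRegion close path during shutdown (for example hbase:rsgroup and hbase:namespace). For illustration only, here is a minimal sketch of forcing the same kind of flush through the public Admin API ahead of a shutdown; the connection setup and the choice of table are assumptions for the example, not part of this test run.

// Illustrative sketch only: the log shows HRegion flushing memstores inside its
// internal close path; a test can trigger an equivalent flush up front via Admin.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushBeforeShutdownExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Flush the rsgroup system table so its memstore is persisted as an HFile,
      // mirroring the "Flushing d11fc6d8... 1/1 column families" entries in the log.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
    }
  }
}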
2023-07-12 10:58:27,954 INFO [RS:1;jenkins-hbase9:45603] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:45603 2023-07-12 10:58:27,962 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:27,962 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:27,964 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:27,964 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:27,968 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:27,968 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:27,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/.tmp/m/6b8e57a133e24d42ba739f29b96ddf6f 2023-07-12 10:58:27,973 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:27,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/.tmp/info/70d88ec33d9749d285ca1cd81b771241 2023-07-12 10:58:27,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/.tmp/m/6b8e57a133e24d42ba739f29b96ddf6f as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/m/6b8e57a133e24d42ba739f29b96ddf6f 2023-07-12 10:58:27,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/info/0e175658ec7548e7a3d10e1a05f00d31 2023-07-12 10:58:27,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70d88ec33d9749d285ca1cd81b771241 2023-07-12 10:58:27,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/.tmp/info/70d88ec33d9749d285ca1cd81b771241 as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/info/70d88ec33d9749d285ca1cd81b771241 2023-07-12 10:58:27,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/m/6b8e57a133e24d42ba739f29b96ddf6f, 
entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 10:58:27,989 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e175658ec7548e7a3d10e1a05f00d31 2023-07-12 10:58:27,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for d11fc6d8febafd447b6d1ce1b23580f4 in 48ms, sequenceid=7, compaction requested=false 2023-07-12 10:58:27,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 10:58:27,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/rsgroup/d11fc6d8febafd447b6d1ce1b23580f4/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 10:58:27,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70d88ec33d9749d285ca1cd81b771241 2023-07-12 10:58:27,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/info/70d88ec33d9749d285ca1cd81b771241, entries=3, sequenceid=8, filesize=5.0 K 2023-07-12 10:58:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:27,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for d11fc6d8febafd447b6d1ce1b23580f4: 2023-07-12 10:58:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159506442.d11fc6d8febafd447b6d1ce1b23580f4. 2023-07-12 10:58:27,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 82d53331a6175c62c9b54e724df82f3a in 54ms, sequenceid=8, compaction requested=false 2023-07-12 10:58:27,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 10:58:28,010 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/rep_barrier/88d968f1e67947d4afb638f4d1f56b3b 2023-07-12 10:58:28,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/namespace/82d53331a6175c62c9b54e724df82f3a/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 10:58:28,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 
2023-07-12 10:58:28,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 82d53331a6175c62c9b54e724df82f3a: 2023-07-12 10:58:28,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159506307.82d53331a6175c62c9b54e724df82f3a. 2023-07-12 10:58:28,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e918aa15b9e12c48ddb2735bef80dd68, disabling compactions & flushes 2023-07-12 10:58:28,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:28,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:28,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. after waiting 0 ms 2023-07-12 10:58:28,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:28,018 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 88d968f1e67947d4afb638f4d1f56b3b 2023-07-12 10:58:28,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/quota/e918aa15b9e12c48ddb2735bef80dd68/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:28,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 2023-07-12 10:58:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e918aa15b9e12c48ddb2735bef80dd68: 2023-07-12 10:58:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689159506770.e918aa15b9e12c48ddb2735bef80dd68. 
2023-07-12 10:58:28,036 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/table/f7058b55a417497bb058e5f1d764f277 2023-07-12 10:58:28,046 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:28,046 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,046 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:28,046 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,046 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,047 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45603,1689159505357 2023-07-12 10:58:28,047 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,048 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7058b55a417497bb058e5f1d764f277 2023-07-12 10:58:28,048 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,45603,1689159505357] 2023-07-12 10:58:28,048 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,45603,1689159505357; numProcessing=1 2023-07-12 10:58:28,049 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/info/0e175658ec7548e7a3d10e1a05f00d31 as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/info/0e175658ec7548e7a3d10e1a05f00d31 2023-07-12 10:58:28,050 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,45603,1689159505357 already deleted, retry=false 2023-07-12 10:58:28,050 INFO [RegionServerTracker-0] master.ServerManager(561): 
Cluster shutdown set; jenkins-hbase9.apache.org,45603,1689159505357 expired; onlineServers=2 2023-07-12 10:58:28,055 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e175658ec7548e7a3d10e1a05f00d31 2023-07-12 10:58:28,056 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/info/0e175658ec7548e7a3d10e1a05f00d31, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 10:58:28,057 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/rep_barrier/88d968f1e67947d4afb638f4d1f56b3b as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/rep_barrier/88d968f1e67947d4afb638f4d1f56b3b 2023-07-12 10:58:28,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 88d968f1e67947d4afb638f4d1f56b3b 2023-07-12 10:58:28,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/rep_barrier/88d968f1e67947d4afb638f4d1f56b3b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 10:58:28,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/.tmp/table/f7058b55a417497bb058e5f1d764f277 as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/table/f7058b55a417497bb058e5f1d764f277 2023-07-12 10:58:28,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7058b55a417497bb058e5f1d764f277 2023-07-12 10:58:28,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/table/f7058b55a417497bb058e5f1d764f277, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 10:58:28,069 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 123ms, sequenceid=31, compaction requested=false 2023-07-12 10:58:28,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 10:58:28,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 10:58:28,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:28,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 
10:58:28,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:28,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:28,142 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,37051,1689159505517; all regions closed. 2023-07-12 10:58:28,142 DEBUG [RS:2;jenkins-hbase9:37051] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 10:58:28,145 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,40093,1689159505202; all regions closed. 2023-07-12 10:58:28,145 DEBUG [RS:0;jenkins-hbase9:40093] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 10:58:28,151 DEBUG [RS:2;jenkins-hbase9:37051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C37051%2C1689159505517:(num 1689159506098) 2023-07-12 10:58:28,151 DEBUG [RS:2;jenkins-hbase9:37051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:28,151 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:28,151 INFO [RS:2;jenkins-hbase9:37051] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
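The region servers above report "Moved 1 WAL file(s) to .../oldWALs" as they close their AsyncFSWAL instances. A minimal sketch, assuming the HDFS namenode address from this log and a hypothetical test-data root, of listing what lands in that archive directory with the plain Hadoop FileSystem API:

// Sketch only: the real run uses a generated test-data root directory; the
// "example-root" path below is a placeholder.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListOldWalsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:44937"), conf);
    Path oldWals = new Path("/user/jenkins/test-data/example-root/oldWALs");
    for (FileStatus status : fs.listStatus(oldWals)) {
      System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
    }
  }
}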
2023-07-12 10:58:28,152 INFO [RS:2;jenkins-hbase9:37051] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:37051 2023-07-12 10:58:28,155 DEBUG [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs 2023-07-12 10:58:28,155 INFO [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40093%2C1689159505202.meta:.meta(num 1689159506253) 2023-07-12 10:58:28,155 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:28,155 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,37051,1689159505517 2023-07-12 10:58:28,155 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,157 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,37051,1689159505517] 2023-07-12 10:58:28,157 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,37051,1689159505517; numProcessing=2 2023-07-12 10:58:28,159 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,37051,1689159505517 already deleted, retry=false 2023-07-12 10:58:28,160 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,37051,1689159505517 expired; onlineServers=1 2023-07-12 10:58:28,161 DEBUG [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/oldWALs 2023-07-12 10:58:28,161 INFO [RS:0;jenkins-hbase9:40093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40093%2C1689159505202:(num 1689159506097) 2023-07-12 10:58:28,161 DEBUG [RS:0;jenkins-hbase9:40093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:28,162 INFO [RS:0;jenkins-hbase9:40093] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:28,162 INFO [RS:0;jenkins-hbase9:40093] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:28,162 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
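The ZooKeeper events above (NodeDeleted under /hbase/rs followed by RegionServerTracker processing the expiration) reflect the ephemeral-znode mechanism region servers use to advertise liveness. The sketch below is plain ZooKeeper client code illustrating that mechanism only; it is not HBase's RegionServerTracker implementation, and the quorum address is an assumption.

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatchExample {
  public static void main(String[] args) throws Exception {
    // Connect to a ZooKeeper quorum (address is an example value).
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
    Watcher watcher = (WatchedEvent event) -> {
      if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
        // In HBase, the active master reacts to this by expiring the missing server.
        System.out.println("children of " + event.getPath() + " changed");
      }
    };
    // Register a one-shot watch on the list of live region server znodes.
    List<String> live = zk.getChildren("/hbase/rs", watcher);
    System.out.println("live region servers: " + live);
  }
}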
2023-07-12 10:58:28,163 INFO [RS:0;jenkins-hbase9:40093] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:40093 2023-07-12 10:58:28,167 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,40093,1689159505202 2023-07-12 10:58:28,167 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:28,168 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,40093,1689159505202] 2023-07-12 10:58:28,168 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,40093,1689159505202; numProcessing=3 2023-07-12 10:58:28,169 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,40093,1689159505202 already deleted, retry=false 2023-07-12 10:58:28,169 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,40093,1689159505202 expired; onlineServers=0 2023-07-12 10:58:28,169 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43287,1689159505012' ***** 2023-07-12 10:58:28,169 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 10:58:28,169 DEBUG [M:0;jenkins-hbase9:43287] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5471b0e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:28,169 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:28,171 INFO [M:0;jenkins-hbase9:43287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@37863041{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:28,171 INFO [M:0;jenkins-hbase9:43287] server.AbstractConnector(383): Stopped ServerConnector@2991b76c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:28,171 INFO [M:0;jenkins-hbase9:43287] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:28,172 INFO [M:0;jenkins-hbase9:43287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d03266b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:28,172 INFO [M:0;jenkins-hbase9:43287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@56f0345d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:28,172 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:28,172 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:28,172 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43287,1689159505012 2023-07-12 10:58:28,172 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43287,1689159505012; all regions closed. 2023-07-12 10:58:28,172 DEBUG [M:0;jenkins-hbase9:43287] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:28,172 INFO [M:0;jenkins-hbase9:43287] master.HMaster(1491): Stopping master jetty server 2023-07-12 10:58:28,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:28,173 INFO [M:0;jenkins-hbase9:43287] server.AbstractConnector(383): Stopped ServerConnector@5102f4f5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:28,173 DEBUG [M:0;jenkins-hbase9:43287] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 10:58:28,173 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 10:58:28,173 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159505842] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159505842,5,FailOnTimeoutGroup] 2023-07-12 10:58:28,173 DEBUG [M:0;jenkins-hbase9:43287] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 10:58:28,173 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159505848] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159505848,5,FailOnTimeoutGroup] 2023-07-12 10:58:28,174 INFO [M:0;jenkins-hbase9:43287] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 10:58:28,175 INFO [M:0;jenkins-hbase9:43287] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 10:58:28,175 INFO [M:0;jenkins-hbase9:43287] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:28,175 DEBUG [M:0;jenkins-hbase9:43287] master.HMaster(1512): Stopping service threads 2023-07-12 10:58:28,175 INFO [M:0;jenkins-hbase9:43287] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 10:58:28,175 ERROR [M:0;jenkins-hbase9:43287] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 10:58:28,176 INFO [M:0;jenkins-hbase9:43287] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 10:58:28,176 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 10:58:28,176 DEBUG [M:0;jenkins-hbase9:43287] zookeeper.ZKUtil(398): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 10:58:28,176 WARN [M:0;jenkins-hbase9:43287] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 10:58:28,176 INFO [M:0;jenkins-hbase9:43287] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 10:58:28,176 INFO [M:0;jenkins-hbase9:43287] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 10:58:28,177 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:28,177 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:28,177 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:28,177 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:28,177 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:28,177 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-12 10:58:28,188 INFO [M:0;jenkins-hbase9:43287] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cbf69350bd7e4f0b9d9e86f979d25919 2023-07-12 10:58:28,193 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cbf69350bd7e4f0b9d9e86f979d25919 as hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cbf69350bd7e4f0b9d9e86f979d25919 2023-07-12 10:58:28,198 INFO [M:0;jenkins-hbase9:43287] regionserver.HStore(1080): Added hdfs://localhost:44937/user/jenkins/test-data/1e3ca376-6874-4029-5428-ccb868a72b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cbf69350bd7e4f0b9d9e86f979d25919, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 10:58:28,200 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=194, compaction requested=false 2023-07-12 10:58:28,201 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 10:58:28,201 DEBUG [M:0;jenkins-hbase9:43287] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:28,205 INFO [M:0;jenkins-hbase9:43287] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 10:58:28,205 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:28,205 INFO [M:0;jenkins-hbase9:43287] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43287 2023-07-12 10:58:28,206 DEBUG [M:0;jenkins-hbase9:43287] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,43287,1689159505012 already deleted, retry=false 2023-07-12 10:58:28,422 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,422 INFO [M:0;jenkins-hbase9:43287] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43287,1689159505012; zookeeper connection closed. 2023-07-12 10:58:28,422 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): master:43287-0x101592157fd0000, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,522 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,522 INFO [RS:0;jenkins-hbase9:40093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,40093,1689159505202; zookeeper connection closed. 2023-07-12 10:58:28,522 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:40093-0x101592157fd0001, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,525 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6a7d710] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6a7d710 2023-07-12 10:58:28,623 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,623 INFO [RS:2;jenkins-hbase9:37051] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,37051,1689159505517; zookeeper connection closed. 2023-07-12 10:58:28,623 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:37051-0x101592157fd0003, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,623 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@9292dfa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@9292dfa 2023-07-12 10:58:28,723 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,723 INFO [RS:1;jenkins-hbase9:45603] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,45603,1689159505357; zookeeper connection closed. 
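From here the log finishes tearing down the JVM cluster ("Shutdown of 1 master(s) and 3 regionserver(s) complete", "Minicluster is down") and immediately brings a fresh one up with the same StartMiniClusterOption. A sketch of the HBaseTestingUtility calls that drive this lifecycle in a test; the surrounding test method is assumed, while the option values are taken from the log:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartExample {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    util.startMiniCluster(option);   // logs "Starting up minicluster with option: ..."
    // ... run assertions against the running cluster ...
    util.shutdownMiniCluster();      // logs "Minicluster is down"
  }
}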
2023-07-12 10:58:28,723 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): regionserver:45603-0x101592157fd0002, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:28,724 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@43dec918] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@43dec918 2023-07-12 10:58:28,724 INFO [Listener at localhost/44365] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 10:58:28,724 WARN [Listener at localhost/44365] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:28,728 INFO [Listener at localhost/44365] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:28,833 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:28,833 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-832310857-172.31.2.10-1689159504146 (Datanode Uuid e1b28d60-9778-4343-be30-f4828c7c77c1) service to localhost/127.0.0.1:44937 2023-07-12 10:58:28,834 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data5/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:28,834 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data6/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:28,836 WARN [Listener at localhost/44365] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:28,841 INFO [Listener at localhost/44365] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:28,946 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:28,946 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-832310857-172.31.2.10-1689159504146 (Datanode Uuid 563dfeb3-2d02-44cc-be8a-39ec9cc16707) service to localhost/127.0.0.1:44937 2023-07-12 10:58:28,947 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data3/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:28,947 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data4/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:28,949 WARN [Listener at localhost/44365] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 10:58:28,952 INFO [Listener at localhost/44365] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:29,056 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 10:58:29,056 WARN [BP-832310857-172.31.2.10-1689159504146 heartbeating to localhost/127.0.0.1:44937] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-832310857-172.31.2.10-1689159504146 (Datanode Uuid 346287a7-7a51-437f-a62b-295e24f1e944) service to localhost/127.0.0.1:44937 2023-07-12 10:58:29,057 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data1/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:29,057 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/cluster_2efd2644-2ec7-28ef-dc22-569402bd4903/dfs/data/data2/current/BP-832310857-172.31.2.10-1689159504146] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 10:58:29,068 INFO [Listener at localhost/44365] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 10:58:29,191 INFO [Listener at localhost/44365] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 10:58:29,219 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 10:58:29,219 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.log.dir so I do NOT create it in target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/def3674d-6c86-f65a-310b-855514e8ca64/hadoop.tmp.dir so I do NOT create it in target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data 
directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98, deleteOnExit=true 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/test.cache.data in system properties and HBase conf 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 10:58:29,220 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 10:58:29,221 DEBUG [Listener at localhost/44365] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:58:29,221 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/nfs.dump.dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 10:58:29,222 INFO [Listener at localhost/44365] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 10:58:29,226 WARN [Listener at localhost/44365] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:58:29,226 WARN [Listener at localhost/44365] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:58:29,269 WARN [Listener at localhost/44365] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:29,271 INFO [Listener at localhost/44365] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:29,276 INFO [Listener at localhost/44365] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/Jetty_localhost_38083_hdfs____lu8v9s/webapp 2023-07-12 10:58:29,288 DEBUG [Listener at localhost/44365-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101592157fd000a, quorum=127.0.0.1:50934, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 10:58:29,289 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101592157fd000a, quorum=127.0.0.1:50934, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 10:58:29,368 INFO [Listener at localhost/44365] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38083 2023-07-12 10:58:29,373 WARN [Listener at localhost/44365] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 10:58:29,373 WARN [Listener at localhost/44365] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 10:58:29,418 WARN [Listener at localhost/32939] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:29,433 WARN [Listener at localhost/32939] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:29,435 WARN [Listener 
at localhost/32939] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:29,436 INFO [Listener at localhost/32939] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:29,441 INFO [Listener at localhost/32939] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/Jetty_localhost_41625_datanode____.sr88u5/webapp 2023-07-12 10:58:29,532 INFO [Listener at localhost/32939] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41625 2023-07-12 10:58:29,538 WARN [Listener at localhost/35235] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:29,553 WARN [Listener at localhost/35235] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:29,554 WARN [Listener at localhost/35235] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:29,555 INFO [Listener at localhost/35235] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:29,559 INFO [Listener at localhost/35235] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/Jetty_localhost_45191_datanode____.d7r1o1/webapp 2023-07-12 10:58:29,635 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b10b392135d352b: Processing first storage report for DS-db684873-dcb0-4ce8-a01c-db32688013f3 from datanode 9bf4664a-adcf-4f06-b2dd-b7ede5a070f8 2023-07-12 10:58:29,635 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b10b392135d352b: from storage DS-db684873-dcb0-4ce8-a01c-db32688013f3 node DatanodeRegistration(127.0.0.1:45413, datanodeUuid=9bf4664a-adcf-4f06-b2dd-b7ede5a070f8, infoPort=41107, infoSecurePort=0, ipcPort=35235, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,635 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b10b392135d352b: Processing first storage report for DS-0a26ca75-2cda-446f-830d-ec190667738e from datanode 9bf4664a-adcf-4f06-b2dd-b7ede5a070f8 2023-07-12 10:58:29,635 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b10b392135d352b: from storage DS-0a26ca75-2cda-446f-830d-ec190667738e node DatanodeRegistration(127.0.0.1:45413, datanodeUuid=9bf4664a-adcf-4f06-b2dd-b7ede5a070f8, infoPort=41107, infoSecurePort=0, ipcPort=35235, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,659 INFO [Listener at localhost/35235] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45191 2023-07-12 10:58:29,668 WARN [Listener at localhost/37717] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 10:58:29,684 WARN [Listener at localhost/37717] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 10:58:29,686 WARN [Listener at localhost/37717] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 10:58:29,687 INFO [Listener at localhost/37717] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 10:58:29,690 INFO [Listener at localhost/37717] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/Jetty_localhost_40273_datanode____.a2vz47/webapp 2023-07-12 10:58:29,765 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbea73904add5ed8f: Processing first storage report for DS-2f07e96d-54cc-41c0-b419-84209f6c74b8 from datanode 2250feb9-cc06-4f68-adcc-d5e0e1809ccb 2023-07-12 10:58:29,765 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbea73904add5ed8f: from storage DS-2f07e96d-54cc-41c0-b419-84209f6c74b8 node DatanodeRegistration(127.0.0.1:45911, datanodeUuid=2250feb9-cc06-4f68-adcc-d5e0e1809ccb, infoPort=38607, infoSecurePort=0, ipcPort=37717, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,765 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbea73904add5ed8f: Processing first storage report for DS-084ad780-3b67-4cf5-950f-3cb09c964b9d from datanode 2250feb9-cc06-4f68-adcc-d5e0e1809ccb 2023-07-12 10:58:29,765 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbea73904add5ed8f: from storage DS-084ad780-3b67-4cf5-950f-3cb09c964b9d node DatanodeRegistration(127.0.0.1:45911, datanodeUuid=2250feb9-cc06-4f68-adcc-d5e0e1809ccb, infoPort=38607, infoSecurePort=0, ipcPort=37717, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,795 INFO [Listener at localhost/37717] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40273 2023-07-12 10:58:29,804 WARN [Listener at localhost/34129] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 10:58:29,896 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x71175585384f06a9: Processing first storage report for DS-1e85d00e-90d9-4c42-8e52-102136d4cf89 from datanode 39ccd5f1-2f92-4056-9583-ee03dbf54a67 2023-07-12 10:58:29,896 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x71175585384f06a9: from storage DS-1e85d00e-90d9-4c42-8e52-102136d4cf89 node DatanodeRegistration(127.0.0.1:42553, datanodeUuid=39ccd5f1-2f92-4056-9583-ee03dbf54a67, infoPort=37985, infoSecurePort=0, ipcPort=34129, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,896 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x71175585384f06a9: Processing first storage 
report for DS-80f0bec5-33fa-40ab-b4f6-559295b33c87 from datanode 39ccd5f1-2f92-4056-9583-ee03dbf54a67 2023-07-12 10:58:29,896 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x71175585384f06a9: from storage DS-80f0bec5-33fa-40ab-b4f6-559295b33c87 node DatanodeRegistration(127.0.0.1:42553, datanodeUuid=39ccd5f1-2f92-4056-9583-ee03dbf54a67, infoPort=37985, infoSecurePort=0, ipcPort=34129, storageInfo=lv=-57;cid=testClusterID;nsid=139952639;c=1689159509229), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 10:58:29,910 DEBUG [Listener at localhost/34129] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4 2023-07-12 10:58:29,912 INFO [Listener at localhost/34129] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/zookeeper_0, clientPort=55124, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 10:58:29,913 INFO [Listener at localhost/34129] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55124 2023-07-12 10:58:29,913 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:29,914 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:29,931 INFO [Listener at localhost/34129] util.FSUtils(471): Created version file at hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab with version=8 2023-07-12 10:58:29,931 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36583/user/jenkins/test-data/12af9e36-5ccd-4026-1f43-6ce15f43e1c4/hbase-staging 2023-07-12 10:58:29,932 DEBUG [Listener at localhost/34129] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 10:58:29,932 DEBUG [Listener at localhost/34129] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 10:58:29,932 DEBUG [Listener at localhost/34129] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 10:58:29,932 DEBUG [Listener at localhost/34129] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
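The entries above show the bring-up sequence this harness follows: the HDFS datanodes register their storages, a MiniZooKeeperCluster comes up on clientPort=55124, the hbase.version file is written, and the LocalHBaseCluster is configured with randomized ports. A minimal, illustrative sketch of how a JUnit test drives this same bring-up through HBaseTestingUtility (the option values mirror the ones printed in this log; the class name is hypothetical):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterBringUpSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Same shape as this run: 1 master, 3 region servers, 3 datanodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // starts DFS, ZooKeeper, master and region servers
        try {
          // test body would run against the minicluster here
        } finally {
          util.shutdownMiniCluster();    // tears the cluster down and removes the test-data dirs
        }
      }
    }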
2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:29,933 INFO [Listener at localhost/34129] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:29,934 INFO [Listener at localhost/34129] ipc.NettyRpcServer(120): Bind to /172.31.2.10:46627 2023-07-12 10:58:29,935 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:29,935 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:29,936 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46627 connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:29,943 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:466270x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:29,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46627-0x10159216b370000 connected 2023-07-12 10:58:29,960 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:29,961 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:29,961 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:29,962 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46627 2023-07-12 10:58:29,964 DEBUG [Listener at localhost/34129] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46627 2023-07-12 10:58:29,965 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46627 2023-07-12 10:58:29,966 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46627 2023-07-12 10:58:29,969 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46627 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:29,971 INFO [Listener at localhost/34129] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
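The RpcExecutor lines above (handlerCount=3, maxQueueLength=30, numCallQueues=1) reflect the RPC handler and call-queue sizing the master is started with. As a hedged sketch, assuming the standard configuration keys rather than anything read from this particular test's setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    // 3 handler threads per executor, as logged above.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Max queued calls; when unset this defaults to 10x the handler count, which
    // matches the maxQueueLength=30 printed for handlerCount=3.
    conf.setInt("hbase.ipc.server.max.callqueue.length", 30);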
2023-07-12 10:58:29,972 INFO [Listener at localhost/34129] http.HttpServer(1146): Jetty bound to port 33623 2023-07-12 10:58:29,972 INFO [Listener at localhost/34129] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:29,973 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:29,973 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42f238c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:29,974 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:29,974 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@fb17cb5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:30,088 INFO [Listener at localhost/34129] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:30,090 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:30,090 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:30,090 INFO [Listener at localhost/34129] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 10:58:30,091 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,093 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@375d08da{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/jetty-0_0_0_0-33623-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1591004025346418974/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 10:58:30,094 INFO [Listener at localhost/34129] server.AbstractConnector(333): Started ServerConnector@66a56199{HTTP/1.1, (http/1.1)}{0.0.0.0:33623} 2023-07-12 10:58:30,094 INFO [Listener at localhost/34129] server.Server(415): Started @40993ms 2023-07-12 10:58:30,095 INFO [Listener at localhost/34129] master.HMaster(444): hbase.rootdir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab, hbase.cluster.distributed=false 2023-07-12 10:58:30,107 INFO [Listener at localhost/34129] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:30,107 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,107 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,107 INFO 
[Listener at localhost/34129] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:30,107 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,108 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:30,108 INFO [Listener at localhost/34129] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:30,109 INFO [Listener at localhost/34129] ipc.NettyRpcServer(120): Bind to /172.31.2.10:34407 2023-07-12 10:58:30,109 INFO [Listener at localhost/34129] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:30,111 DEBUG [Listener at localhost/34129] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:30,112 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,113 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,114 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34407 connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:30,117 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:344070x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:30,118 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:344070x0, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:30,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34407-0x10159216b370001 connected 2023-07-12 10:58:30,119 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:30,119 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:30,120 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34407 2023-07-12 10:58:30,120 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34407 2023-07-12 10:58:30,120 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34407 2023-07-12 10:58:30,122 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34407 2023-07-12 10:58:30,122 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34407 2023-07-12 10:58:30,124 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:30,124 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:30,124 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:30,124 INFO [Listener at localhost/34129] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:30,124 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:30,125 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:30,125 INFO [Listener at localhost/34129] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:30,125 INFO [Listener at localhost/34129] http.HttpServer(1146): Jetty bound to port 35681 2023-07-12 10:58:30,125 INFO [Listener at localhost/34129] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:30,128 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,128 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@569e02a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:30,129 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,129 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@445cad02{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:30,252 INFO [Listener at localhost/34129] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:30,252 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:30,253 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:30,253 INFO [Listener at localhost/34129] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:30,254 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,254 INFO 
[Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@76b3ed90{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/jetty-0_0_0_0-35681-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8414956790470919001/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:30,255 INFO [Listener at localhost/34129] server.AbstractConnector(333): Started ServerConnector@157b8418{HTTP/1.1, (http/1.1)}{0.0.0.0:35681} 2023-07-12 10:58:30,256 INFO [Listener at localhost/34129] server.Server(415): Started @41155ms 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,267 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:30,268 INFO [Listener at localhost/34129] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:30,269 INFO [Listener at localhost/34129] ipc.NettyRpcServer(120): Bind to /172.31.2.10:35705 2023-07-12 10:58:30,269 INFO [Listener at localhost/34129] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:30,270 DEBUG [Listener at localhost/34129] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:30,271 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,271 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,272 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35705 connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:30,275 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:357050x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
10:58:30,277 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35705-0x10159216b370002 connected 2023-07-12 10:58:30,277 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:30,277 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:30,278 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:30,278 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35705 2023-07-12 10:58:30,279 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35705 2023-07-12 10:58:30,281 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35705 2023-07-12 10:58:30,281 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35705 2023-07-12 10:58:30,281 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35705 2023-07-12 10:58:30,283 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:30,283 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:30,283 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:30,283 INFO [Listener at localhost/34129] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:30,284 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:30,284 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:30,284 INFO [Listener at localhost/34129] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
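Once a region server (or the master) has registered its ZKWatcher against the 127.0.0.1:55124 ensemble as above, test code talks to the cluster through an ordinary client connection. A small illustrative fragment, reusing the util instance from the bring-up sketch earlier:

    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Connects to the same ZooKeeper ensemble the servers registered with.
    Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
    Admin admin = connection.getAdmin();
    // admin can now drive table and, via the rsgroup endpoint, group operations.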
2023-07-12 10:58:30,284 INFO [Listener at localhost/34129] http.HttpServer(1146): Jetty bound to port 39533 2023-07-12 10:58:30,284 INFO [Listener at localhost/34129] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:30,289 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,289 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@267d594e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:30,289 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,290 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@9028d76{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:30,400 INFO [Listener at localhost/34129] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:30,401 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:30,401 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:30,401 INFO [Listener at localhost/34129] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:30,402 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,403 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2bfbb1c9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/jetty-0_0_0_0-39533-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8877792518933154692/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:30,405 INFO [Listener at localhost/34129] server.AbstractConnector(333): Started ServerConnector@1972e0d4{HTTP/1.1, (http/1.1)}{0.0.0.0:39533} 2023-07-12 10:58:30,405 INFO [Listener at localhost/34129] server.Server(415): Started @41304ms 2023-07-12 10:58:30,416 INFO [Listener at localhost/34129] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:30,416 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,416 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,416 INFO [Listener at localhost/34129] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:30,417 INFO 
[Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:30,417 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:30,417 INFO [Listener at localhost/34129] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:30,418 INFO [Listener at localhost/34129] ipc.NettyRpcServer(120): Bind to /172.31.2.10:40681 2023-07-12 10:58:30,418 INFO [Listener at localhost/34129] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:30,419 DEBUG [Listener at localhost/34129] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:30,419 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,420 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,421 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40681 connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:30,425 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:406810x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:30,426 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:406810x0, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 10:58:30,427 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40681-0x10159216b370003 connected 2023-07-12 10:58:30,427 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:30,427 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:30,428 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40681 2023-07-12 10:58:30,428 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40681 2023-07-12 10:58:30,428 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40681 2023-07-12 10:58:30,429 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40681 2023-07-12 10:58:30,429 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=40681 2023-07-12 10:58:30,430 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:30,431 INFO [Listener at localhost/34129] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 10:58:30,432 INFO [Listener at localhost/34129] http.HttpServer(1146): Jetty bound to port 34205 2023-07-12 10:58:30,432 INFO [Listener at localhost/34129] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:30,441 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,441 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@546e1439{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:30,441 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,441 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6047822b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:30,556 INFO [Listener at localhost/34129] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:30,556 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:30,556 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:30,557 INFO [Listener at localhost/34129] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:30,557 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:30,558 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6d26fc67{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/jetty-0_0_0_0-34205-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8684247119595659993/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:30,559 INFO [Listener at localhost/34129] server.AbstractConnector(333): Started ServerConnector@3b21876d{HTTP/1.1, (http/1.1)}{0.0.0.0:34205} 2023-07-12 10:58:30,560 INFO [Listener at localhost/34129] server.Server(415): Started @41459ms 2023-07-12 10:58:30,562 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:30,564 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@25cd0fa1{HTTP/1.1, (http/1.1)}{0.0.0.0:41329} 2023-07-12 10:58:30,565 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @41463ms 2023-07-12 10:58:30,565 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,567 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:30,567 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,569 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:30,569 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:30,569 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,569 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:30,569 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 10:58:30,570 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:30,572 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,46627,1689159509932 from backup master directory 2023-07-12 10:58:30,572 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:30,573 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,573 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:30,573 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 10:58:30,573 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,595 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/hbase.id with ID: d3cf418a-0f31-4337-b4c6-8f5772beddf1 2023-07-12 10:58:30,606 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:30,608 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,621 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7b6e41fe to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:30,627 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b67f7df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:30,627 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:30,627 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 10:58:30,628 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:30,629 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store-tmp 2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 10:58:30,639 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 10:58:30,639 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
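The descriptor printed above for the local 'master:store' region defines a single 'proc' column family. A rough, illustrative code equivalent of that family definition (builder calls chosen to match the attributes the log prints; not taken from the HBase source):

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setBlocksize(64 * 1024)           // BLOCKSIZE => '65536'
        .setBlockCacheEnabled(true)        // BLOCKCACHE => 'true'
        .build();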
2023-07-12 10:58:30,639 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:30,640 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/WALs/jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,642 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C46627%2C1689159509932, suffix=, logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/WALs/jenkins-hbase9.apache.org,46627,1689159509932, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/oldWALs, maxLogs=10 2023-07-12 10:58:30,657 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:30,659 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:30,659 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:30,661 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/WALs/jenkins-hbase9.apache.org,46627,1689159509932/jenkins-hbase9.apache.org%2C46627%2C1689159509932.1689159510642 2023-07-12 10:58:30,662 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK], DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK]] 2023-07-12 10:58:30,662 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:30,662 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:30,662 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,662 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,663 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,664 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 10:58:30,665 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 10:58:30,665 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:30,666 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,666 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,669 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 10:58:30,671 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:30,671 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11402654720, jitterRate=0.06195497512817383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:30,671 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 10:58:30,671 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 10:58:30,673 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 10:58:30,673 INFO 
[master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 10:58:30,673 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 10:58:30,673 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 10:58:30,673 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 10:58:30,674 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 10:58:30,674 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 10:58:30,675 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 10:58:30,676 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 10:58:30,676 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 10:58:30,676 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 10:58:30,678 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,678 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 10:58:30,678 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 10:58:30,679 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 10:58:30,680 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:30,680 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:30,680 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 10:58:30,680 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:30,680 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,681 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,46627,1689159509932, sessionid=0x10159216b370000, setting cluster-up flag (Was=false) 2023-07-12 10:58:30,686 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,690 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 10:58:30,690 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,694 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,698 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 10:58:30,699 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:30,699 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.hbase-snapshot/.tmp 2023-07-12 10:58:30,700 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 10:58:30,700 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 10:58:30,701 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 10:58:30,702 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:30,702 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
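The two coprocessor lines just above show the master loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint (which backs the RSGroupAdminService registered a moment earlier) and the test's CPMasterObserver as system coprocessors. As a minimal sketch of how such an endpoint is normally enabled, assuming the standard "hbase.coprocessor.master.classes" property rather than quoting this test's actual setup code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableRSGroupEndpointSketch {
        public static void main(String[] args) {
            // Sketch only: register the RSGroup admin endpoint as a master coprocessor
            // before the cluster (or mini-cluster) starts. The property name is the
            // standard HBase key; the exact wiring used by TestRSGroupsAdmin1 is assumed.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            System.out.println(conf.get("hbase.coprocessor.master.classes"));
        }
    }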
2023-07-12 10:58:30,703 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:30,713 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:30,713 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:30,713 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 10:58:30,713 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:30,714 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689159540716 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 10:58:30,716 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 10:58:30,716 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 10:58:30,716 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,717 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 10:58:30,717 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 10:58:30,717 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 10:58:30,717 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 10:58:30,718 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 10:58:30,718 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:30,718 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159510718,5,FailOnTimeoutGroup] 2023-07-12 10:58:30,718 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159510718,5,FailOnTimeoutGroup] 2023-07-12 10:58:30,718 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,718 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 10:58:30,718 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,718 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,732 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:30,733 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:30,733 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab 2023-07-12 10:58:30,744 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:30,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family info of region 1588230740 2023-07-12 10:58:30,746 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/info 2023-07-12 10:58:30,747 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:30,747 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:30,747 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:30,748 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:30,749 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:30,749 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:30,749 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:30,750 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/table 2023-07-12 10:58:30,750 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:30,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:30,751 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740 2023-07-12 10:58:30,752 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740 2023-07-12 10:58:30,753 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 10:58:30,754 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:30,756 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:30,756 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9526900320, jitterRate=-0.11273826658725739}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:30,756 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:30,756 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:30,756 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:30,756 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:30,756 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:30,757 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:30,757 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:30,757 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:30,757 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 10:58:30,758 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 10:58:30,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 10:58:30,758 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 10:58:30,760 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 10:58:30,762 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(951): ClusterId : d3cf418a-0f31-4337-b4c6-8f5772beddf1 2023-07-12 10:58:30,762 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(951): ClusterId : d3cf418a-0f31-4337-b4c6-8f5772beddf1 2023-07-12 10:58:30,762 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(951): ClusterId : d3cf418a-0f31-4337-b4c6-8f5772beddf1 2023-07-12 10:58:30,762 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:30,762 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:30,762 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:30,764 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:30,764 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:30,765 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:30,764 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:30,765 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:30,765 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:30,767 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:30,769 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:30,769 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ReadOnlyZKClient(139): Connect 0x7d6c1e97 to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:30,770 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:30,772 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ReadOnlyZKClient(139): Connect 0x18eac03c to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:30,772 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ReadOnlyZKClient(139): Connect 0x30826121 to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:30,780 DEBUG [RS:0;jenkins-hbase9:34407] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73abff4a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:30,780 DEBUG [RS:0;jenkins-hbase9:34407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@779475e7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:30,782 DEBUG [RS:1;jenkins-hbase9:35705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ed82132, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:30,783 DEBUG [RS:1;jenkins-hbase9:35705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29b85d1c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:30,783 DEBUG [RS:2;jenkins-hbase9:40681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34bde3e9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:30,783 DEBUG [RS:2;jenkins-hbase9:40681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71bf6b1a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:30,790 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:35705 2023-07-12 10:58:30,791 INFO [RS:1;jenkins-hbase9:35705] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:30,791 INFO [RS:1;jenkins-hbase9:35705] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:30,791 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1022): About to register with Master. 
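A few entries earlier, PEWorker-1 printed the full hbase:meta table descriptor it creates during InitMetaProcedure (the 'info', 'rep_barrier' and 'table' families with IN_MEMORY, BLOOMFILTER => 'NONE', BLOCKSIZE and so on). As an illustrative sketch of the client-side builder API that produces descriptors of that shape, using a hypothetical table name and only an 'info'-like family (this is not the code InitMetaProcedure itself runs):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
        public static void main(String[] args) {
            // Sketch only: a family carrying the attributes the log prints for hbase:meta's
            // 'info' family (IN_MEMORY => 'true', VERSIONS => '3', BLOOMFILTER => 'NONE',
            // BLOCKSIZE => '8192').
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example_meta_like"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                    .setInMemory(true)
                    .setMaxVersions(3)
                    .setBloomFilterType(BloomType.NONE)
                    .setBlocksize(8192)
                    .build())
                .build();
            System.out.println(desc);
        }
    }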
2023-07-12 10:58:30,791 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,46627,1689159509932 with isa=jenkins-hbase9.apache.org/172.31.2.10:35705, startcode=1689159510267 2023-07-12 10:58:30,791 DEBUG [RS:1;jenkins-hbase9:35705] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:30,792 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:34407 2023-07-12 10:58:30,792 INFO [RS:0;jenkins-hbase9:34407] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:30,792 INFO [RS:0;jenkins-hbase9:34407] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:30,792 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:30,793 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:38797, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:30,793 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,46627,1689159509932 with isa=jenkins-hbase9.apache.org/172.31.2.10:34407, startcode=1689159510107 2023-07-12 10:58:30,794 DEBUG [RS:0;jenkins-hbase9:34407] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:30,795 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46627] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,795 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:30,795 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 10:58:30,796 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab 2023-07-12 10:58:30,796 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:40681 2023-07-12 10:58:30,796 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32939 2023-07-12 10:58:30,796 INFO [RS:2;jenkins-hbase9:40681] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:30,796 INFO [RS:2;jenkins-hbase9:40681] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:30,796 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33623 2023-07-12 10:58:30,796 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 10:58:30,797 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,46627,1689159509932 with isa=jenkins-hbase9.apache.org/172.31.2.10:40681, startcode=1689159510416 2023-07-12 10:58:30,797 DEBUG [RS:2;jenkins-hbase9:40681] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:30,797 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46199, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:30,797 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46627] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 10:58:30,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 10:58:30,798 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab 2023-07-12 10:58:30,798 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32939 2023-07-12 10:58:30,798 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:45381, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:30,798 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33623 2023-07-12 10:58:30,798 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46627] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:30,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 10:58:30,798 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:30,799 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab 2023-07-12 10:58:30,799 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32939 2023-07-12 10:58:30,799 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33623 2023-07-12 10:58:30,802 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,802 WARN [RS:1;jenkins-hbase9:35705] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:30,802 INFO [RS:1;jenkins-hbase9:35705] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:30,802 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,803 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,40681,1689159510416] 2023-07-12 10:58:30,803 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,803 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,35705,1689159510267] 2023-07-12 10:58:30,803 WARN [RS:0;jenkins-hbase9:34407] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 10:58:30,803 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,34407,1689159510107] 2023-07-12 10:58:30,803 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,803 INFO [RS:0;jenkins-hbase9:34407] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:30,803 WARN [RS:2;jenkins-hbase9:40681] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
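The reportForDuty / Registering regionserver exchange above ends with each region server holding an ephemeral znode under /hbase/rs; the RegionServerTracker lines show the master picking those nodes up, and the RSGroupInfoManager refreshing its default group as each one appears. As a rough sketch, the same membership list can be read with a plain ZooKeeper client (the quorum address and znode path are taken from the log; the client code itself is only an illustration):

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListLiveRegionServersSketch {
        public static void main(String[] args) throws Exception {
            // Sketch only: the ephemeral children of /hbase/rs are the live-server list the
            // master's RegionServerTracker watches; entries look like
            // jenkins-hbase9.apache.org,34407,1689159510107 (host,port,startcode).
            ZooKeeper zk = new ZooKeeper("127.0.0.1:55124", 90000, event -> { });
            List<String> liveServers = zk.getChildren("/hbase/rs", false);
            liveServers.forEach(System.out::println);
            zk.close();
        }
    }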
2023-07-12 10:58:30,803 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,803 INFO [RS:2;jenkins-hbase9:40681] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:30,803 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,814 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,814 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,814 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,814 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,814 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,814 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,815 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,815 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,815 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,815 DEBUG [RS:1;jenkins-hbase9:35705] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:30,815 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:30,815 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:30,816 INFO [RS:1;jenkins-hbase9:35705] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:30,816 INFO [RS:2;jenkins-hbase9:40681] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 
milliseconds 2023-07-12 10:58:30,816 INFO [RS:0;jenkins-hbase9:34407] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:30,817 INFO [RS:1;jenkins-hbase9:35705] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:30,818 INFO [RS:1;jenkins-hbase9:35705] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:30,818 INFO [RS:0;jenkins-hbase9:34407] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:30,818 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,819 INFO [RS:0;jenkins-hbase9:34407] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:30,819 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,819 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:30,820 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:30,820 INFO [RS:2;jenkins-hbase9:40681] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:30,821 INFO [RS:2;jenkins-hbase9:40681] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:30,821 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,825 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:30,825 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,825 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
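The MemStoreFlusher figures above are consistent with the default memstore fractions, assuming the defaults are in effect for this test (hbase.regionserver.global.memstore.size = 0.4 of heap and hbase.regionserver.global.memstore.size.lower.limit = 0.95 of that): a global limit of 782.4 MB implies a region-server heap of roughly 782.4 / 0.4 ≈ 1956 MB, and the low-water mark works out to 782.4 × 0.95 ≈ 743.3 MB, matching the logged globalMemStoreLimitLowMark.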
2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,828 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,828 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,828 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,828 DEBUG [RS:0;jenkins-hbase9:34407] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,828 DEBUG [RS:1;jenkins-hbase9:35705] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,827 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,828 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,829 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,829 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,829 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,829 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,829 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,830 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,830 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,830 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,830 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,830 DEBUG [RS:2;jenkins-hbase9:40681] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:30,833 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,833 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,833 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,846 INFO [RS:1;jenkins-hbase9:35705] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:30,846 INFO [RS:0;jenkins-hbase9:34407] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:30,846 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,35705,1689159510267-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,846 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,34407,1689159510107-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:30,850 INFO [RS:2;jenkins-hbase9:40681] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:30,850 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,40681,1689159510416-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:30,857 INFO [RS:1;jenkins-hbase9:35705] regionserver.Replication(203): jenkins-hbase9.apache.org,35705,1689159510267 started 2023-07-12 10:58:30,857 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,35705,1689159510267, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:35705, sessionid=0x10159216b370002 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35705,1689159510267' 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:30,858 INFO [RS:0;jenkins-hbase9:34407] regionserver.Replication(203): jenkins-hbase9.apache.org,34407,1689159510107 started 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:30,858 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:30,859 DEBUG [RS:1;jenkins-hbase9:35705] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:30,859 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35705,1689159510267' 2023-07-12 10:58:30,859 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:30,858 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,34407,1689159510107, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:34407, sessionid=0x10159216b370001 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34407,1689159510107' 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:30,859 DEBUG [RS:1;jenkins-hbase9:35705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:30,859 DEBUG [RS:1;jenkins-hbase9:35705] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 
2023-07-12 10:58:30,859 INFO [RS:1;jenkins-hbase9:35705] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:30,859 INFO [RS:1;jenkins-hbase9:35705] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:30,859 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:30,860 DEBUG [RS:0;jenkins-hbase9:34407] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:30,860 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,34407,1689159510107' 2023-07-12 10:58:30,860 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:30,860 DEBUG [RS:0;jenkins-hbase9:34407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:30,860 DEBUG [RS:0;jenkins-hbase9:34407] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:30,860 INFO [RS:0;jenkins-hbase9:34407] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:30,860 INFO [RS:0;jenkins-hbase9:34407] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:58:30,865 INFO [RS:2;jenkins-hbase9:40681] regionserver.Replication(203): jenkins-hbase9.apache.org,40681,1689159510416 started 2023-07-12 10:58:30,865 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,40681,1689159510416, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:40681, sessionid=0x10159216b370003 2023-07-12 10:58:30,865 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:30,865 DEBUG [RS:2;jenkins-hbase9:40681] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,865 DEBUG [RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40681,1689159510416' 2023-07-12 10:58:30,865 DEBUG [RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:30,865 DEBUG [RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40681,1689159510416' 2023-07-12 10:58:30,866 DEBUG 
[RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:30,866 DEBUG [RS:2;jenkins-hbase9:40681] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:30,866 INFO [RS:2;jenkins-hbase9:40681] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:30,866 INFO [RS:2;jenkins-hbase9:40681] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:30,910 DEBUG [jenkins-hbase9:46627] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:30,911 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40681,1689159510416, state=OPENING 2023-07-12 10:58:30,913 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 10:58:30,915 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:30,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40681,1689159510416}] 2023-07-12 10:58:30,915 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:30,961 INFO [RS:1;jenkins-hbase9:35705] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C35705%2C1689159510267, suffix=, logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,35705,1689159510267, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs, maxLogs=32 2023-07-12 10:58:30,962 INFO [RS:0;jenkins-hbase9:34407] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C34407%2C1689159510107, suffix=, logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,34407,1689159510107, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs, maxLogs=32 2023-07-12 10:58:30,968 INFO [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase9.apache.org%2C40681%2C1689159510416, suffix=, logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,40681,1689159510416, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs, maxLogs=32 2023-07-12 10:58:30,988 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:30,988 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:30,988 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:30,996 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:30,996 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:30,996 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:30,997 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:30,999 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:30,999 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:31,001 INFO [RS:1;jenkins-hbase9:35705] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,35705,1689159510267/jenkins-hbase9.apache.org%2C35705%2C1689159510267.1689159510962 2023-07-12 10:58:31,001 INFO [RS:0;jenkins-hbase9:34407] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,34407,1689159510107/jenkins-hbase9.apache.org%2C34407%2C1689159510107.1689159510962 2023-07-12 10:58:31,001 
DEBUG [RS:1;jenkins-hbase9:35705] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK], DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK]] 2023-07-12 10:58:31,001 INFO [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,40681,1689159510416/jenkins-hbase9.apache.org%2C40681%2C1689159510416.1689159510969 2023-07-12 10:58:31,001 DEBUG [RS:0;jenkins-hbase9:34407] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK], DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK]] 2023-07-12 10:58:31,004 DEBUG [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK], DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK]] 2023-07-12 10:58:31,007 WARN [ReadOnlyZKClient-127.0.0.1:55124@0x7b6e41fe] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 10:58:31,007 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:31,008 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:31,009 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40681] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:58340 deadline: 1689159571009, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:31,070 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:31,071 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:31,073 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:31,078 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 10:58:31,078 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:31,080 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C40681%2C1689159510416.meta, suffix=.meta, 
logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,40681,1689159510416, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs, maxLogs=32 2023-07-12 10:58:31,101 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:31,101 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:31,101 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:31,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,40681,1689159510416/jenkins-hbase9.apache.org%2C40681%2C1689159510416.meta.1689159511080.meta 2023-07-12 10:58:31,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK], DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK]] 2023-07-12 10:58:31,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 10:58:31,106 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 10:58:31,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 10:58:31,110 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 10:58:31,111 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/info 2023-07-12 10:58:31,111 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/info 2023-07-12 10:58:31,111 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 10:58:31,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:31,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 10:58:31,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:31,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/rep_barrier 2023-07-12 10:58:31,113 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 10:58:31,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:31,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 10:58:31,114 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/table 2023-07-12 10:58:31,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/table 2023-07-12 10:58:31,115 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 10:58:31,115 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:31,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740 2023-07-12 10:58:31,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740 2023-07-12 10:58:31,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 10:58:31,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 10:58:31,122 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10611734560, jitterRate=-0.011705204844474792}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 10:58:31,122 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 10:58:31,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689159511070 2023-07-12 10:58:31,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 10:58:31,128 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 10:58:31,128 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40681,1689159510416, state=OPEN 2023-07-12 10:58:31,130 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 10:58:31,130 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 10:58:31,131 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 10:58:31,131 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40681,1689159510416 in 215 msec 2023-07-12 10:58:31,133 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 10:58:31,133 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec 2023-07-12 10:58:31,135 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 432 msec 2023-07-12 10:58:31,135 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689159511135, completionTime=-1 2023-07-12 10:58:31,135 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 10:58:31,135 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 10:58:31,139 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 10:58:31,139 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689159571139 2023-07-12 10:58:31,139 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689159631139 2023-07-12 10:58:31,139 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-12 10:58:31,144 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46627,1689159509932-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,144 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46627,1689159509932-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,144 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46627,1689159509932-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,144 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:46627, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,145 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:31,145 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 10:58:31,145 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:31,146 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 10:58:31,146 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 10:58:31,147 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:31,148 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:31,149 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,150 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6 empty. 2023-07-12 10:58:31,150 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,151 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 10:58:31,168 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:31,169 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4c57f39da2f5ffa50fe4c4949be814b6, NAME => 'hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp 2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4c57f39da2f5ffa50fe4c4949be814b6, disabling compactions & flushes 2023-07-12 10:58:31,183 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 
2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. after waiting 0 ms 2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:31,183 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:31,183 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4c57f39da2f5ffa50fe4c4949be814b6: 2023-07-12 10:58:31,185 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:31,186 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159511186"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159511186"}]},"ts":"1689159511186"} 2023-07-12 10:58:31,189 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:31,189 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:31,190 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159511189"}]},"ts":"1689159511189"} 2023-07-12 10:58:31,192 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 10:58:31,196 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:31,196 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:31,196 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:31,196 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:31,196 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:31,196 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4c57f39da2f5ffa50fe4c4949be814b6, ASSIGN}] 2023-07-12 10:58:31,198 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4c57f39da2f5ffa50fe4c4949be814b6, ASSIGN 2023-07-12 10:58:31,199 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4c57f39da2f5ffa50fe4c4949be814b6, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,34407,1689159510107; forceNewPlan=false, retain=false 2023-07-12 10:58:31,312 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:31,314 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 10:58:31,316 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:31,317 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:31,318 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,319 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec empty. 
2023-07-12 10:58:31,319 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,319 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 10:58:31,332 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:31,333 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 34c9471a445b0a39d3a7aea3fe2954ec, NAME => 'hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp 2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 34c9471a445b0a39d3a7aea3fe2954ec, disabling compactions & flushes 2023-07-12 10:58:31,342 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. after waiting 0 ms 2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,342 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 
2023-07-12 10:58:31,342 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 34c9471a445b0a39d3a7aea3fe2954ec: 2023-07-12 10:58:31,344 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:31,345 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159511345"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159511345"}]},"ts":"1689159511345"} 2023-07-12 10:58:31,346 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:31,347 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:31,347 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159511347"}]},"ts":"1689159511347"} 2023-07-12 10:58:31,348 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 10:58:31,349 INFO [jenkins-hbase9:46627] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:31,351 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4c57f39da2f5ffa50fe4c4949be814b6, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:31,351 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159511351"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159511351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159511351"}]},"ts":"1689159511351"} 2023-07-12 10:58:31,352 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 4c57f39da2f5ffa50fe4c4949be814b6, server=jenkins-hbase9.apache.org,34407,1689159510107}] 2023-07-12 10:58:31,352 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:31,352 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:31,352 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:31,352 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:31,352 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:31,352 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=34c9471a445b0a39d3a7aea3fe2954ec, ASSIGN}] 2023-07-12 10:58:31,354 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:rsgroup, region=34c9471a445b0a39d3a7aea3fe2954ec, ASSIGN 2023-07-12 10:58:31,354 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=34c9471a445b0a39d3a7aea3fe2954ec, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,34407,1689159510107; forceNewPlan=false, retain=false 2023-07-12 10:58:31,505 INFO [jenkins-hbase9:46627] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 10:58:31,505 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:31,506 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=34c9471a445b0a39d3a7aea3fe2954ec, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:31,506 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 10:58:31,507 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159511506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159511506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159511506"}]},"ts":"1689159511506"} 2023-07-12 10:58:31,508 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34840, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 10:58:31,509 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 34c9471a445b0a39d3a7aea3fe2954ec, server=jenkins-hbase9.apache.org,34407,1689159510107}] 2023-07-12 10:58:31,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 
2023-07-12 10:58:31,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c57f39da2f5ffa50fe4c4949be814b6, NAME => 'hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:31,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:31,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,517 INFO [StoreOpener-4c57f39da2f5ffa50fe4c4949be814b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,519 DEBUG [StoreOpener-4c57f39da2f5ffa50fe4c4949be814b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/info 2023-07-12 10:58:31,519 DEBUG [StoreOpener-4c57f39da2f5ffa50fe4c4949be814b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/info 2023-07-12 10:58:31,519 INFO [StoreOpener-4c57f39da2f5ffa50fe4c4949be814b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c57f39da2f5ffa50fe4c4949be814b6 columnFamilyName info 2023-07-12 10:58:31,520 INFO [StoreOpener-4c57f39da2f5ffa50fe4c4949be814b6-1] regionserver.HStore(310): Store=4c57f39da2f5ffa50fe4c4949be814b6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:31,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:31,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:31,527 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 4c57f39da2f5ffa50fe4c4949be814b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10245861600, jitterRate=-0.04577977955341339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:31,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 4c57f39da2f5ffa50fe4c4949be814b6: 2023-07-12 10:58:31,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6., pid=7, masterSystemTime=1689159511505 2023-07-12 10:58:31,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:31,533 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 
2023-07-12 10:58:31,534 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4c57f39da2f5ffa50fe4c4949be814b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:31,534 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689159511534"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159511534"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159511534"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159511534"}]},"ts":"1689159511534"} 2023-07-12 10:58:31,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 10:58:31,537 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 4c57f39da2f5ffa50fe4c4949be814b6, server=jenkins-hbase9.apache.org,34407,1689159510107 in 183 msec 2023-07-12 10:58:31,538 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 10:58:31,538 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4c57f39da2f5ffa50fe4c4949be814b6, ASSIGN in 341 msec 2023-07-12 10:58:31,539 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:31,539 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159511539"}]},"ts":"1689159511539"} 2023-07-12 10:58:31,540 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 10:58:31,545 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:31,547 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 10:58:31,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 401 msec 2023-07-12 10:58:31,548 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:31,549 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:31,551 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:31,554 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34846, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-12 10:58:31,556 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 10:58:31,564 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:31,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-12 10:58:31,578 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:31,582 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-12 10:58:31,582 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 10:58:31,664 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 34c9471a445b0a39d3a7aea3fe2954ec, NAME => 'hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:31,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 10:58:31,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. service=MultiRowMutationService 2023-07-12 10:58:31,665 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 10:58:31,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:31,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,666 INFO [StoreOpener-34c9471a445b0a39d3a7aea3fe2954ec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,667 DEBUG [StoreOpener-34c9471a445b0a39d3a7aea3fe2954ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/m 2023-07-12 10:58:31,667 DEBUG [StoreOpener-34c9471a445b0a39d3a7aea3fe2954ec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/m 2023-07-12 10:58:31,668 INFO [StoreOpener-34c9471a445b0a39d3a7aea3fe2954ec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 34c9471a445b0a39d3a7aea3fe2954ec columnFamilyName m 2023-07-12 10:58:31,668 INFO [StoreOpener-34c9471a445b0a39d3a7aea3fe2954ec-1] regionserver.HStore(310): Store=34c9471a445b0a39d3a7aea3fe2954ec/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:31,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,672 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(1055): writing seq id for 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:31,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:31,674 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 34c9471a445b0a39d3a7aea3fe2954ec; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@21e71282, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:31,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 34c9471a445b0a39d3a7aea3fe2954ec: 2023-07-12 10:58:31,674 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec., pid=9, masterSystemTime=1689159511661 2023-07-12 10:58:31,676 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,676 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:31,676 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=34c9471a445b0a39d3a7aea3fe2954ec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:31,676 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689159511676"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159511676"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159511676"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159511676"}]},"ts":"1689159511676"} 2023-07-12 10:58:31,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-12 10:58:31,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 34c9471a445b0a39d3a7aea3fe2954ec, server=jenkins-hbase9.apache.org,34407,1689159510107 in 169 msec 2023-07-12 10:58:31,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 10:58:31,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=34c9471a445b0a39d3a7aea3fe2954ec, ASSIGN in 328 msec 2023-07-12 10:58:31,688 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:31,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 117 msec 2023-07-12 10:58:31,696 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:31,696 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159511696"}]},"ts":"1689159511696"} 2023-07-12 10:58:31,698 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 10:58:31,700 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:31,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 387 msec 2023-07-12 10:58:31,704 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 10:58:31,706 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 10:58:31,706 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.133sec 2023-07-12 10:58:31,706 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 10:58:31,706 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 10:58:31,707 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 10:58:31,707 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46627,1689159509932-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 10:58:31,707 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,46627,1689159509932-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 10:58:31,711 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 10:58:31,718 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 10:58:31,718 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 10:58:31,722 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:31,722 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:31,725 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:31,726 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 10:58:31,763 DEBUG [Listener at localhost/34129] zookeeper.ReadOnlyZKClient(139): Connect 0x41854f52 to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:31,768 DEBUG [Listener at localhost/34129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64cdcbec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:31,770 DEBUG [hconnection-0x3ed2d6d6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:31,772 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:31,773 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:31,773 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:31,776 DEBUG [Listener at localhost/34129] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 10:58:31,777 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:48434, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 10:58:31,781 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 10:58:31,781 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:31,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-12 10:58:31,783 DEBUG [Listener at localhost/34129] zookeeper.ReadOnlyZKClient(139): Connect 0x5ba1a8f2 to 127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:31,788 DEBUG [Listener at localhost/34129] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@736db136, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:31,788 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:31,797 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:31,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10159216b37000a connected 2023-07-12 10:58:31,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:31,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:31,804 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 10:58:31,816 INFO [Listener at localhost/34129] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-12 10:58:31,816 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:31,816 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:31,817 INFO [Listener at localhost/34129] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 10:58:31,817 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 10:58:31,817 INFO [Listener at localhost/34129] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 10:58:31,817 INFO [Listener at localhost/34129] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 10:58:31,817 INFO [Listener at localhost/34129] ipc.NettyRpcServer(120): Bind to /172.31.2.10:41989 2023-07-12 10:58:31,818 INFO [Listener at localhost/34129] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 10:58:31,819 DEBUG [Listener at localhost/34129] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 10:58:31,820 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:31,820 INFO [Listener at localhost/34129] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 10:58:31,821 INFO [Listener at localhost/34129] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41989 connecting to ZooKeeper ensemble=127.0.0.1:55124 2023-07-12 10:58:31,824 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:419890x0, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 10:58:31,825 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(162): regionserver:419890x0, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 10:58:31,826 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41989-0x10159216b37000b connected 2023-07-12 10:58:31,827 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 10:58:31,827 DEBUG [Listener at localhost/34129] zookeeper.ZKUtil(164): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 10:58:31,828 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41989 2023-07-12 10:58:31,829 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41989 2023-07-12 10:58:31,831 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41989 2023-07-12 10:58:31,832 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41989 2023-07-12 10:58:31,832 DEBUG [Listener at localhost/34129] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41989 2023-07-12 10:58:31,834 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 10:58:31,834 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 10:58:31,834 INFO [Listener at localhost/34129] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] http.HttpServer(1146): Jetty bound to port 39717 2023-07-12 10:58:31,835 INFO [Listener at localhost/34129] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 10:58:31,842 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:31,842 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f0d392b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,AVAILABLE} 2023-07-12 10:58:31,842 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:31,842 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@123c341c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 10:58:31,958 INFO [Listener at localhost/34129] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 10:58:31,959 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 10:58:31,959 INFO [Listener at localhost/34129] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 10:58:31,959 INFO [Listener at localhost/34129] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 10:58:31,960 INFO [Listener at localhost/34129] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 10:58:31,960 INFO [Listener at localhost/34129] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@484f4c26{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/java.io.tmpdir/jetty-0_0_0_0-39717-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4471427565333186549/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:31,962 INFO [Listener at localhost/34129] server.AbstractConnector(333): Started ServerConnector@10f687cb{HTTP/1.1, (http/1.1)}{0.0.0.0:39717} 2023-07-12 10:58:31,963 INFO [Listener at localhost/34129] server.Server(415): Started @42861ms 2023-07-12 10:58:31,966 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(951): ClusterId : d3cf418a-0f31-4337-b4c6-8f5772beddf1 2023-07-12 10:58:31,966 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 10:58:31,969 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 10:58:31,969 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 10:58:31,972 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 10:58:31,976 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ReadOnlyZKClient(139): Connect 0x08acd93f to 
127.0.0.1:55124 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 10:58:31,995 DEBUG [RS:3;jenkins-hbase9:41989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66b365a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 10:58:31,995 DEBUG [RS:3;jenkins-hbase9:41989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@254b24f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-12 10:58:32,010 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:41989 2023-07-12 10:58:32,010 INFO [RS:3;jenkins-hbase9:41989] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 10:58:32,010 INFO [RS:3;jenkins-hbase9:41989] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 10:58:32,010 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 10:58:32,011 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,46627,1689159509932 with isa=jenkins-hbase9.apache.org/172.31.2.10:41989, startcode=1689159511816 2023-07-12 10:58:32,011 DEBUG [RS:3;jenkins-hbase9:41989] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 10:58:32,021 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46011, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 10:58:32,021 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46627] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 10:58:32,024 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab 2023-07-12 10:58:32,024 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32939 2023-07-12 10:58:32,024 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33623 2023-07-12 10:58:32,030 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:32,030 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:32,030 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:32,030 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:32,031 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:32,031 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,031 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,41989,1689159511816] 2023-07-12 10:58:32,031 WARN [RS:3;jenkins-hbase9:41989] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 10:58:32,031 INFO [RS:3;jenkins-hbase9:41989] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 10:58:32,032 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:32,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:32,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:32,032 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 10:58:32,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:32,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:32,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:32,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,034 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 10:58:32,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,042 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:32,043 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:32,043 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,043 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ZKUtil(162): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,044 DEBUG [RS:3;jenkins-hbase9:41989] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 10:58:32,045 INFO [RS:3;jenkins-hbase9:41989] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 10:58:32,046 INFO [RS:3;jenkins-hbase9:41989] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 10:58:32,048 INFO [RS:3;jenkins-hbase9:41989] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 10:58:32,048 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:32,048 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 10:58:32,050 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,050 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-12 10:58:32,051 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,051 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,051 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,051 DEBUG [RS:3;jenkins-hbase9:41989] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-12 10:58:32,054 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:32,055 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:32,055 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 10:58:32,072 INFO [RS:3;jenkins-hbase9:41989] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 10:58:32,072 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41989,1689159511816-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 10:58:32,087 INFO [RS:3;jenkins-hbase9:41989] regionserver.Replication(203): jenkins-hbase9.apache.org,41989,1689159511816 started 2023-07-12 10:58:32,087 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,41989,1689159511816, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:41989, sessionid=0x10159216b37000b 2023-07-12 10:58:32,087 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 10:58:32,087 DEBUG [RS:3;jenkins-hbase9:41989] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,087 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41989,1689159511816' 2023-07-12 10:58:32,087 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:32,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41989,1689159511816' 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 10:58:32,088 DEBUG [RS:3;jenkins-hbase9:41989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 10:58:32,089 DEBUG [RS:3;jenkins-hbase9:41989] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 10:58:32,089 INFO [RS:3;jenkins-hbase9:41989] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 10:58:32,089 INFO [RS:3;jenkins-hbase9:41989] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 10:58:32,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:32,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:32,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:32,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:32,095 DEBUG [hconnection-0x24f59cfd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:32,097 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:32,101 DEBUG [hconnection-0x24f59cfd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 10:58:32,102 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:34854, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 10:58:32,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:32,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:32,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:32,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:32,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:48434 deadline: 1689160712106, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:32,107 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:32,108 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:32,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:32,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:32,109 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:32,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:32,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:32,157 INFO [Listener at localhost/34129] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=559 (was 506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/44365-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:44937 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50934@0x649dfbc5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x24f59cfd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1465603782-2548 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-43995fad-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:44762 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data6) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1465603782-2550 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x41854f52-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1465603782-2546 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:45928 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1465603782-2547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,46627,1689159509932 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@69823956 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x08acd93f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_629197992_17 at /127.0.0.1:59700 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1527379374-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab-prefix:jenkins-hbase9.apache.org,34407,1689159510107 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData-prefix:jenkins-hbase9.apache.org,46627,1689159509932 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 32939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 474283062@qtp-1683980461-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38083 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-79271683-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab-prefix:jenkins-hbase9.apache.org,35705,1689159510267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35235 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50934@0x649dfbc5-SendThread(127.0.0.1:50934) java.lang.Thread.sleep(Native Method) 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@638306e1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1527379374-2214 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:44937 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase9:34407 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@702aad4d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x18eac03c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase9:41989 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_629197992_17 at /127.0.0.1:47632 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data1/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7d6c1e97-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at 
localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 32939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:1;jenkins-hbase9:35705 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2237 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1527379374-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x41854f52 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab-prefix:jenkins-hbase9.apache.org,40681,1689159510416.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:44937 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-12826d0e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x30826121-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Server handler 4 on default port 37717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x5ba1a8f2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1357937132@qtp-628238282-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50934@0x649dfbc5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:44674 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1452381735-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp288103467-2179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:32939 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:44764 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x08acd93f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:59796 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:55124): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp1465603782-2549 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7d6c1e97-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159510718 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:44937 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x18eac03c-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7b6e41fe sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@a9456f3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4d88ce9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 32939 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: M:0;jenkins-hbase9:46627 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 34129 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase9:35705-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-268348601_17 at /127.0.0.1:45888 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp288103467-2178 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data5/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase9:40681-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7b6e41fe-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp722134192-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data4/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase9:40681 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_629197992_17 at /127.0.0.1:59784 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1465603782-2544-acceptor-0@41cf0831-ServerConnector@10f687cb{HTTP/1.1, (http/1.1)}{0.0.0.0:39717} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x5ba1a8f2-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_629197992_17 at /127.0.0.1:44748 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_225084777_17 at /127.0.0.1:59768 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 2 on default port 34129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:32939 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1452381735-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159510718 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: IPC Server handler 3 on default port 37717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 2 on default port 35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-25e91bdf-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,43287,1689159505012 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp722134192-2280 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1141396998@qtp-628238282-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41625 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:32939 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722134192-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-268348601_17 at /127.0.0.1:59732 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722134192-2282-acceptor-0@1d40ca5e-ServerConnector@25cd0fa1{HTTP/1.1, (http/1.1)}{0.0.0.0:41329} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@106c4a8a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@343c1c61 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/34129.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp288103467-2183 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 373026934@qtp-1683980461-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: jenkins-hbase9:34407Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:45936 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1527379374-2213 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data2/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 177006823@qtp-287361544-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@76cc2238 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:44937 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x7ba085ac-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/34129.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:41989Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab-prefix:jenkins-hbase9.apache.org,40681,1689159510416 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1527379374-2207 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:55124 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x30826121-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:3;jenkins-hbase9:41989-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44937 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1527379374-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@25f7137c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x41854f52-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1465603782-2545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase9:40681Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x18eac03c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 32939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:0;jenkins-hbase9:34407-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 526480773@qtp-617037995-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 4 on default port 34129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) 
Potentially hanging thread: qtp1452381735-2267 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7ba085ac-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@d00317 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3ed2d6d6-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1452381735-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 32939 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_225084777_17 at /127.0.0.1:45918 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data6/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-46096509-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x30826121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722134192-2281 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34129 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x08acd93f-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7b6e41fe-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp288103467-2180 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:44937 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1026920033@qtp-617037995-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45191 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp722134192-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2238-acceptor-0@2593a5d2-ServerConnector@1972e0d4{HTTP/1.1, (http/1.1)}{0.0.0.0:39533} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp722134192-2279 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1527379374-2208-acceptor-0@1dba1323-ServerConnector@157b8418{HTTP/1.1, (http/1.1)}{0.0.0.0:35681} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
Listener at localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@9383571[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1452381735-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1452381735-2268-acceptor-0@5879898c-ServerConnector@3b21876d{HTTP/1.1, (http/1.1)}{0.0.0.0:34205} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp288103467-2181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1465603782-2543 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data3/current/BP-1956518920-172.31.2.10-1689159509229 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x24f59cfd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46627 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1452381735-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34129-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1401388898_17 at /127.0.0.1:59808 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(2028828512) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/34129-SendThread(127.0.0.1:55124) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase9:35705Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1527379374-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5e1ad929 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 32939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:32939 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722134192-2278 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:44937 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x5ba1a8f2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp288103467-2176 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1452381735-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:44937 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6caa0e42 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34407 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2cf13362[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3ffb886f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44365-SendThread(127.0.0.1:50934) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_629197992_17 at /127.0.0.1:45920 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55124@0x7d6c1e97 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/903966814.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@54750611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-268348601_17 at /127.0.0.1:44698 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp872002637-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 37717 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/34129 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) 
org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp288103467-2177-acceptor-0@2095507-ServerConnector@66a56199{HTTP/1.1, (http/1.1)}{0.0.0.0:33623} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_225084777_17 at /127.0.0.1:44732 [Receiving block BP-1956518920-172.31.2.10-1689159509229:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1956518920-172.31.2.10-1689159509229:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:32939 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 655275252@qtp-287361544-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40273 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp288103467-2182 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=833 (was 782) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=280 (was 312), ProcessCount=170 (was 170), AvailableMemoryMB=8050 (was 8199) 2023-07-12 10:58:32,159 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-12 10:58:32,176 INFO [Listener at localhost/34129] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=559, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=280, ProcessCount=170, AvailableMemoryMB=8050 2023-07-12 10:58:32,176 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-12 10:58:32,176 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 10:58:32,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:32,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:32,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:32,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:32,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:32,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:32,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:32,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:32,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:32,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:32,191 INFO [RS:3;jenkins-hbase9:41989] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41989%2C1689159511816, suffix=, logDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,41989,1689159511816, archiveDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs, maxLogs=32 2023-07-12 10:58:32,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:32,203 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:32,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:32,215 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK] 2023-07-12 10:58:32,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:32,216 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK] 2023-07-12 10:58:32,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:32,218 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK] 2023-07-12 10:58:32,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing 
ZK GroupInfo count: 4 2023-07-12 10:58:32,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:32,229 INFO [RS:3;jenkins-hbase9:41989] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,41989,1689159511816/jenkins-hbase9.apache.org%2C41989%2C1689159511816.1689159512191 2023-07-12 10:58:32,229 DEBUG [RS:3;jenkins-hbase9:41989] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45413,DS-db684873-dcb0-4ce8-a01c-db32688013f3,DISK], DatanodeInfoWithStorage[127.0.0.1:45911,DS-2f07e96d-54cc-41c0-b419-84209f6c74b8,DISK], DatanodeInfoWithStorage[127.0.0.1:42553,DS-1e85d00e-90d9-4c42-8e52-102136d4cf89,DISK]] 2023-07-12 10:58:32,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:32,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:32,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:32,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:32,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:48434 deadline: 1689160712234, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:32,234 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:32,236 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:32,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:32,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:32,237 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:32,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:32,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:32,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:32,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 10:58:32,243 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:32,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 10:58:32,244 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:32,245 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:32,245 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:32,247 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 10:58:32,249 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,250 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory 
hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb empty. 2023-07-12 10:58:32,251 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,251 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 10:58:32,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 10:58:32,295 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 10:58:32,296 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 417cae12f008e1a1b05634bdac01e1bb, NAME => 't1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp 2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 417cae12f008e1a1b05634bdac01e1bb, disabling compactions & flushes 2023-07-12 10:58:32,327 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. after waiting 0 ms 2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:32,327 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 
2023-07-12 10:58:32,327 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 417cae12f008e1a1b05634bdac01e1bb: 2023-07-12 10:58:32,330 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 10:58:32,331 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159512330"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159512330"}]},"ts":"1689159512330"} 2023-07-12 10:58:32,332 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 10:58:32,333 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 10:58:32,333 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159512333"}]},"ts":"1689159512333"} 2023-07-12 10:58:32,334 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 10:58:32,338 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 10:58:32,339 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, ASSIGN}] 2023-07-12 10:58:32,340 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, ASSIGN 2023-07-12 10:58:32,341 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,34407,1689159510107; forceNewPlan=false, retain=false 2023-07-12 10:58:32,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 10:58:32,491 INFO [jenkins-hbase9:46627] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
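[Editor's sketch] The create 't1' request logged above (HMaster$4, stored as CreateTableProcedure pid=12) is an ordinary client DDL call. As a rough illustration only, not code from this test, the equivalent Admin call with the same descriptor (REGION_REPLICATION 1, a single family cf1 with VERSIONS 1 and BLOOMFILTER NONE) might look like the following; the class name and connection setup are assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateT1 {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Mirrors the descriptor printed by HMaster above: REGION_REPLICATION => '1',
          // one family 'cf1' with VERSIONS => '1' and BLOOMFILTER => 'NONE'.
          TableDescriptorBuilder table =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
                  .setRegionReplication(1)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder
                      .newBuilder(Bytes.toBytes("cf1"))
                      .setMaxVersions(1)
                      .setBloomFilterType(BloomType.NONE)
                      .build());
          // Blocks until the master reports the CreateTableProcedure finished.
          admin.createTable(table.build());
        }
      }
    }

createTable does not return until the procedure completes, which is why the client keeps polling "Checking to see if procedure is done pid=12" in the surrounding lines.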
2023-07-12 10:58:32,493 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=417cae12f008e1a1b05634bdac01e1bb, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,493 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159512493"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159512493"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159512493"}]},"ts":"1689159512493"} 2023-07-12 10:58:32,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 417cae12f008e1a1b05634bdac01e1bb, server=jenkins-hbase9.apache.org,34407,1689159510107}] 2023-07-12 10:58:32,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 10:58:32,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:32,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 417cae12f008e1a1b05634bdac01e1bb, NAME => 't1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 10:58:32,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 10:58:32,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,651 INFO [StoreOpener-417cae12f008e1a1b05634bdac01e1bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,653 DEBUG [StoreOpener-417cae12f008e1a1b05634bdac01e1bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/cf1 2023-07-12 10:58:32,653 DEBUG [StoreOpener-417cae12f008e1a1b05634bdac01e1bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/cf1 2023-07-12 10:58:32,653 INFO [StoreOpener-417cae12f008e1a1b05634bdac01e1bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 417cae12f008e1a1b05634bdac01e1bb columnFamilyName cf1 2023-07-12 10:58:32,653 INFO [StoreOpener-417cae12f008e1a1b05634bdac01e1bb-1] regionserver.HStore(310): Store=417cae12f008e1a1b05634bdac01e1bb/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 10:58:32,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:32,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 10:58:32,659 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 417cae12f008e1a1b05634bdac01e1bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9785829280, jitterRate=-0.08862362802028656}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 10:58:32,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 417cae12f008e1a1b05634bdac01e1bb: 2023-07-12 10:58:32,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb., pid=14, masterSystemTime=1689159512646 2023-07-12 10:58:32,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:32,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 
2023-07-12 10:58:32,661 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=417cae12f008e1a1b05634bdac01e1bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,662 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159512661"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689159512661"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689159512661"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689159512661"}]},"ts":"1689159512661"} 2023-07-12 10:58:32,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 10:58:32,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 417cae12f008e1a1b05634bdac01e1bb, server=jenkins-hbase9.apache.org,34407,1689159510107 in 169 msec 2023-07-12 10:58:32,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 10:58:32,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, ASSIGN in 325 msec 2023-07-12 10:58:32,666 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 10:58:32,666 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159512666"}]},"ts":"1689159512666"} 2023-07-12 10:58:32,667 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 10:58:32,668 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 10:58:32,669 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 10:58:32,672 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 429 msec 2023-07-12 10:58:32,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 10:58:32,855 INFO [Listener at localhost/34129] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 10:58:32,855 DEBUG [Listener at localhost/34129] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 10:58:32,855 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:32,857 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 
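[Editor's sketch] Once the region is open, the test utility waits until every region of t1 is assigned (timeout 60,000 ms above). A client can perform the same check through RegionLocator; the helper below is a hypothetical sketch, not code from the test, and assumes an already open Connection:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Hypothetical helper: poll until every region of the table has a live location,
    // roughly what the test utility's "waiting for assignment" step verifies.
    final class AssignmentCheck {
      static void waitForAssignment(Connection conn, TableName table, long timeoutMs)
          throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          while (System.currentTimeMillis() < deadline) {
            boolean allAssigned = true;
            for (HRegionLocation loc : locator.getAllRegionLocations()) {
              if (loc == null || loc.getServerName() == null) {
                allAssigned = false;
                break;
              }
            }
            if (allAssigned) {
              return; // in this run t1's single region opened on jenkins-hbase9.apache.org,34407
            }
            Thread.sleep(100);
          }
          throw new IOException("Regions of " + table + " not assigned within " + timeoutMs + " ms");
        }
      }
    }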
2023-07-12 10:58:32,858 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:32,858 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-12 10:58:32,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 10:58:32,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 10:58:32,862 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 10:58:32,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 10:58:32,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.2.10:48434 deadline: 1689159572859, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 10:58:32,865 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:32,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=7 msec 2023-07-12 10:58:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:32,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:32,967 INFO [Listener at localhost/34129] client.HBaseAdmin$15(890): Started disable of t1 
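[Editor's sketch] The second CreateTable for t1 is rejected with TableExistsException and rolled back (pid=15), which is the situation this test case (testNotMoveTableToNullRSGroupWhenCreatingExistingTable) exercises. On the client side the failure surfaces from Admin.createTable; a hedged sketch of guarding against it, assuming an open Admin handle:

    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class CreateIfAbsent {
      // Returns true if the table was created, false if it already existed.
      static boolean createIfAbsent(Admin admin, TableDescriptor desc) throws java.io.IOException {
        TableName name = desc.getTableName();
        if (admin.tableExists(name)) {   // cheap pre-check
          return false;
        }
        try {
          admin.createTable(desc);
          return true;
        } catch (TableExistsException e) {
          // Raced with another creator; same outcome as the rolled-back procedure above.
          return false;
        }
      }
    }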
2023-07-12 10:58:32,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable t1 2023-07-12 10:58:32,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 10:58:32,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 10:58:32,971 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159512971"}]},"ts":"1689159512971"} 2023-07-12 10:58:32,972 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 10:58:32,973 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 10:58:32,974 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, UNASSIGN}] 2023-07-12 10:58:32,975 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, UNASSIGN 2023-07-12 10:58:32,975 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=417cae12f008e1a1b05634bdac01e1bb, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:32,975 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159512975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689159512975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689159512975"}]},"ts":"1689159512975"} 2023-07-12 10:58:32,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 417cae12f008e1a1b05634bdac01e1bb, server=jenkins-hbase9.apache.org,34407,1689159510107}] 2023-07-12 10:58:33,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 10:58:33,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:33,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 417cae12f008e1a1b05634bdac01e1bb, disabling compactions & flushes 2023-07-12 10:58:33,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:33,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:33,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 
after waiting 0 ms 2023-07-12 10:58:33,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:33,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 10:58:33,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb. 2023-07-12 10:58:33,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 417cae12f008e1a1b05634bdac01e1bb: 2023-07-12 10:58:33,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:33,135 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=417cae12f008e1a1b05634bdac01e1bb, regionState=CLOSED 2023-07-12 10:58:33,135 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689159513135"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689159513135"}]},"ts":"1689159513135"} 2023-07-12 10:58:33,138 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 10:58:33,138 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 417cae12f008e1a1b05634bdac01e1bb, server=jenkins-hbase9.apache.org,34407,1689159510107 in 160 msec 2023-07-12 10:58:33,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 10:58:33,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=417cae12f008e1a1b05634bdac01e1bb, UNASSIGN in 164 msec 2023-07-12 10:58:33,139 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689159513139"}]},"ts":"1689159513139"} 2023-07-12 10:58:33,140 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 10:58:33,142 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 10:58:33,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-12 10:58:33,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 10:58:33,272 INFO [Listener at localhost/34129] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 10:58:33,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete t1 2023-07-12 10:58:33,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 10:58:33,276 DEBUG 
[PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 10:58:33,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 10:58:33,276 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 10:58:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,280 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:33,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 10:58:33,281 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/cf1, FileablePath, hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/recovered.edits] 2023-07-12 10:58:33,286 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/recovered.edits/4.seqid to hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/archive/data/default/t1/417cae12f008e1a1b05634bdac01e1bb/recovered.edits/4.seqid 2023-07-12 10:58:33,286 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/.tmp/data/default/t1/417cae12f008e1a1b05634bdac01e1bb 2023-07-12 10:58:33,286 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 10:58:33,288 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 10:58:33,290 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 10:58:33,291 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 10:58:33,292 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 10:58:33,292 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-12 10:58:33,292 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689159513292"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:33,294 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 10:58:33,294 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 417cae12f008e1a1b05634bdac01e1bb, NAME => 't1,,1689159512239.417cae12f008e1a1b05634bdac01e1bb.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 10:58:33,294 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-12 10:58:33,294 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689159513294"}]},"ts":"9223372036854775807"} 2023-07-12 10:58:33,295 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 10:58:33,297 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 10:58:33,298 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-12 10:58:33,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 10:58:33,382 INFO [Listener at localhost/34129] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 10:58:33,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
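[Editor's sketch] The disable and delete above (DisableTableProcedure pid=16, then DeleteTableProcedure pid=19, with the region directory moved aside by HFileArchiver) are the standard two-step client teardown. A minimal sketch, assuming an open Admin handle:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTable {
      static void drop(Admin admin, String name) throws java.io.IOException {
        TableName table = TableName.valueOf(name);
        if (admin.tableExists(table)) {
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);   // DisableTableProcedure: regions are unassigned first
          }
          admin.deleteTable(table);      // DeleteTableProcedure: FS layout archived, meta rows removed
        }
      }
    }

Both calls block until the corresponding master procedure finishes, matching the "Operation: DISABLE ... completed" and "Operation: DELETE ... completed" client lines above.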
2023-07-12 10:58:33,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,399 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:48434 deadline: 1689160713408, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,409 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:33,412 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,413 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:33,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:33,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:33,431 INFO [Listener at localhost/34129] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=568 (was 559) - Thread LEAK? -, OpenFileDescriptor=836 (was 833) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=280 (was 280), ProcessCount=170 (was 170), AvailableMemoryMB=7972 (was 8050) 2023-07-12 10:58:33,431 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-12 10:58:33,447 INFO [Listener at localhost/34129] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=568, OpenFileDescriptor=836, MaxFileDescriptor=60000, SystemLoadAverage=280, ProcessCount=170, AvailableMemoryMB=7972 2023-07-12 10:58:33,447 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-12 10:58:33,447 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 10:58:33,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:33,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,459 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,461 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160713469, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,470 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:33,471 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,472 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:33,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:33,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:33,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 10:58:33,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:33,475 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 10:58:33,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 10:58:33,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 10:58:33,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
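[Editor's note] The block above is the per-method setup/teardown of TestRSGroupsBase restoring the rsgroup layout between test methods: list the groups, move no tables and no servers back to default, drop and re-create the "master" group, then try to move the master's own address (jenkins-hbase9.apache.org:46627) into it. That last step is what raises the recurring ConstraintException "Server ... is either offline or it does not exist.", since the master's host:port is not an online region server; the test only logs it as "Got this on setup, FYI" and continues. The sketch below is illustrative only, not the test's actual code: it mirrors the client-side call chain visible in the stack frames (RSGroupAdminClient.moveServers -> master ExecMasterService), assuming the branch-2.x hbase-rsgroup client constructed from an open Connection; the host and port are just the values taken from this log.

```java
// Illustrative sketch only -- not TestRSGroupsBase itself. It reproduces the
// shape of the call seen in the stack trace above, under the assumption that
// RSGroupAdminClient(Connection) and moveServers(Set<Address>, String) are
// available as in the branch-2.x hbase-rsgroup module.
import java.util.Collections;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Host and port copied from the log above; placeholders for illustration.
      Set<Address> servers =
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 46627));
      try {
        // Attempt to move the master's address into the "master" rsgroup,
        // as the teardown in the log does.
        rsGroupAdmin.moveServers(servers, "master");
      } catch (ConstraintException e) {
        // Expected when the address is not a live region server -- this is the
        // same "is either offline or it does not exist" rejection logged above.
        System.out.println("move rejected: " + e.getMessage());
      }
    }
  }
}
```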
2023-07-12 10:58:33,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,491 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160713501, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,501 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:33,503 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,504 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:33,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:33,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:33,525 INFO [Listener at localhost/34129] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 568) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 836), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=280 (was 280), ProcessCount=170 (was 170), AvailableMemoryMB=7972 (was 7972) 2023-07-12 10:58:33,525 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-12 10:58:33,543 INFO [Listener at localhost/34129] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=280, ProcessCount=170, AvailableMemoryMB=7971 2023-07-12 10:58:33,543 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-12 10:58:33,543 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 10:58:33,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:33,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,555 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,558 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160713564, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,564 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 10:58:33,566 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,567 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:33,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:33,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:33,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
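[Editor's note] The same rejection repeats for every test method (callId 133, 168, 196, 224, ...) because each setup/teardown pass retries the same move of the master's address; the surrounding ResourceChecker warnings ("Thread=569 is superior to 500") are unrelated and only record that the thread count exceeds the checker's threshold. Below is a minimal sketch of the kind of membership check that produces the error text thrown from RSGroupAdminServer.moveServers in the traces above; it is not the actual RSGroupAdminServer logic, and the method and parameter names (checkServersExist, knownGroupMembers) are invented for the example. Only ConstraintException and Address are real HBase classes here.

```java
// Simplified illustration of a precondition that yields
// "Server <host:port> is either offline or it does not exist." -- an assumed
// stand-in for the real server-side validation, not a copy of it.
import java.util.Set;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;

final class MoveServersGuardSketch {
  private MoveServersGuardSketch() {}

  /**
   * Rejects any requested address that is not a known group member. The
   * master's own host:port never registers as a region server, so a request
   * like the one in the log above would fail this check.
   */
  static void checkServersExist(Set<Address> requested, Set<Address> knownGroupMembers)
      throws ConstraintException {
    for (Address addr : requested) {
      if (!knownGroupMembers.contains(addr)) {
        throw new ConstraintException(
            "Server " + addr + " is either offline or it does not exist.");
      }
    }
  }
}
```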
2023-07-12 10:58:33,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,587 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160713596, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,596 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:33,598 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:33,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,599 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:33,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:33,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:33,618 INFO [Listener at localhost/34129] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=280 (was 280), ProcessCount=170 (was 170), AvailableMemoryMB=7971 (was 7971) 2023-07-12 10:58:33,618 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-12 10:58:33,635 INFO [Listener at localhost/34129] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=280, ProcessCount=170, AvailableMemoryMB=7971 2023-07-12 10:58:33,635 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-12 10:58:33,635 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 10:58:33,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:33,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 10:58:33,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:33,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:33,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:33,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:33,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:33,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:33,648 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:33,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:33,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,651 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:33,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:33,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160713658, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:33,659 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more
2023-07-12 10:58:33,661 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 10:58:33,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-12 10:58:33,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 10:58:33,662 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 10:58:33,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-12 10:58:33,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 10:58:33,663 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint
2023-07-12 10:58:33,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_foo
2023-07-12 10:58:33,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo
2023-07-12 10:58:33,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 10:58:33,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 10:58:33,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-12 10:58:33,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup
2023-07-12 10:58:33,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-12 10:58:33,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 10:58:33,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}
2023-07-12 10:58:33,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo
2023-07-12 10:58:33,674 DEBUG
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 10:58:33,679 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:33,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 10:58:33,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 10:58:33,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_foo 2023-07-12 10:58:33,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:33,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.2.10:48434 deadline: 1689160713776, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 10:58:33,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$16(3053): Client=jenkins//172.31.2.10 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 10:58:33,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 10:58:33,796 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 10:58:33,797 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-12 10:58:33,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 10:58:33,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_anotherGroup 2023-07-12 10:58:33,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 10:58:33,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:33,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 10:58:33,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:33,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 10:58:33,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:33,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:33,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:33,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete Group_foo 2023-07-12 10:58:33,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,913 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,915 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 10:58:33,916 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,918 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 10:58:33,918 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 10:58:33,918 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,920 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 10:58:33,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 10:58:34,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 10:58:34,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_foo 2023-07-12 10:58:34,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 10:58:34,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:34,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:34,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 10:58:34,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:34,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:34,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:34,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219)
at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010)
at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132)
at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007)
at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 10:58:34,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.2.10:48434 deadline: 1689159574028, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist.
2023-07-12 10:58:34,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-12 10:58:34,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 10:58:34,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default
2023-07-12 10:58:34,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-12 10:58:34,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:34,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:34,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:34,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_anotherGroup 2023-07-12 10:58:34,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:34,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:34,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 10:58:34,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:34,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-12 10:58:34,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 10:58:34,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-12 10:58:34,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-12 10:58:34,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-12 10:58:34,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-12 10:58:34,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:34,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 10:58:34,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 10:58:34,047 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 10:58:34,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-12 10:58:34,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 10:58:34,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 10:58:34,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 10:58:34,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 10:58:34,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:34,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:34,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:46627] to rsgroup master 2023-07-12 10:58:34,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 10:58:34,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:48434 deadline: 1689160714056, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 2023-07-12 10:58:34,057 WARN [Listener at localhost/34129] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:46627 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 10:58:34,058 INFO [Listener at localhost/34129] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 10:58:34,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-12 10:58:34,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 10:58:34,059 INFO [Listener at localhost/34129] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:34407, jenkins-hbase9.apache.org:35705, jenkins-hbase9.apache.org:40681, jenkins-hbase9.apache.org:41989], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 10:58:34,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-12 10:58:34,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46627] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 10:58:34,076 INFO [Listener at localhost/34129] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570 (was 570), OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=280 (was 280), ProcessCount=170 (was 170), AvailableMemoryMB=7968 (was 7971) 2023-07-12 10:58:34,077 WARN [Listener at localhost/34129] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-12 10:58:34,077 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 10:58:34,077 INFO [Listener at localhost/34129] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 10:58:34,077 DEBUG [Listener at localhost/34129] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x41854f52 to 127.0.0.1:55124 2023-07-12 10:58:34,077 DEBUG [Listener at localhost/34129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,077 DEBUG [Listener at localhost/34129] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 10:58:34,077 
DEBUG [Listener at localhost/34129] util.JVMClusterUtil(257): Found active master hash=413411998, stopped=false 2023-07-12 10:58:34,077 DEBUG [Listener at localhost/34129] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 10:58:34,077 DEBUG [Listener at localhost/34129] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 10:58:34,077 INFO [Listener at localhost/34129] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,46627,1689159509932 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 10:58:34,079 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 10:58:34,079 INFO [Listener at localhost/34129] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 10:58:34,079 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:34,079 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:34,079 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:34,079 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:34,080 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 10:58:34,080 DEBUG [Listener at localhost/34129] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b6e41fe to 127.0.0.1:55124 2023-07-12 
10:58:34,080 DEBUG [Listener at localhost/34129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,34407,1689159510107' ***** 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,35705,1689159510267' ***** 2023-07-12 10:58:34,080 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,40681,1689159510416' ***** 2023-07-12 10:58:34,080 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:34,080 INFO [Listener at localhost/34129] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:34,081 INFO [Listener at localhost/34129] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,41989,1689159511816' ***** 2023-07-12 10:58:34,081 INFO [Listener at localhost/34129] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 10:58:34,081 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:34,081 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 10:58:34,087 INFO [RS:2;jenkins-hbase9:40681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6d26fc67{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:34,087 INFO [RS:1;jenkins-hbase9:35705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2bfbb1c9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:34,087 INFO [RS:0;jenkins-hbase9:34407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@76b3ed90{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:34,087 INFO [RS:3;jenkins-hbase9:41989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@484f4c26{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 10:58:34,088 INFO [RS:0;jenkins-hbase9:34407] server.AbstractConnector(383): Stopped ServerConnector@157b8418{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:34,088 INFO [RS:3;jenkins-hbase9:41989] server.AbstractConnector(383): Stopped ServerConnector@10f687cb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:34,088 INFO [RS:0;jenkins-hbase9:34407] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:34,088 INFO [RS:1;jenkins-hbase9:35705] server.AbstractConnector(383): Stopped 
ServerConnector@1972e0d4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:34,088 INFO [RS:2;jenkins-hbase9:40681] server.AbstractConnector(383): Stopped ServerConnector@3b21876d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 10:58:34,089 INFO [RS:0;jenkins-hbase9:34407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@445cad02{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:34,089 INFO [RS:1;jenkins-hbase9:35705] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:34,088 INFO [RS:3;jenkins-hbase9:41989] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:34,090 INFO [RS:0;jenkins-hbase9:34407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@569e02a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:34,089 INFO [RS:2;jenkins-hbase9:40681] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 10:58:34,091 INFO [RS:3;jenkins-hbase9:41989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@123c341c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:34,090 INFO [RS:1;jenkins-hbase9:35705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@9028d76{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:34,092 INFO [RS:3;jenkins-hbase9:41989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f0d392b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:34,093 INFO [RS:1;jenkins-hbase9:35705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@267d594e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:34,092 INFO [RS:2;jenkins-hbase9:40681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6047822b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 10:58:34,093 INFO [RS:0;jenkins-hbase9:34407] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:34,094 INFO [RS:2;jenkins-hbase9:40681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@546e1439{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,STOPPED} 2023-07-12 10:58:34,094 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:34,094 INFO [RS:0;jenkins-hbase9:34407] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 10:58:34,095 INFO [RS:0;jenkins-hbase9:34407] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:34,095 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(3305): Received CLOSE for 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:34,095 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(3305): Received CLOSE for 34c9471a445b0a39d3a7aea3fe2954ec 2023-07-12 10:58:34,095 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:34,095 DEBUG [RS:0;jenkins-hbase9:34407] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7d6c1e97 to 127.0.0.1:55124 2023-07-12 10:58:34,095 DEBUG [RS:0;jenkins-hbase9:34407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,095 INFO [RS:1;jenkins-hbase9:35705] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:34,095 INFO [RS:3;jenkins-hbase9:41989] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:34,095 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:34,095 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:34,095 INFO [RS:1;jenkins-hbase9:35705] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:34,096 INFO [RS:1;jenkins-hbase9:35705] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:34,095 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 10:58:34,096 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1478): Online Regions={4c57f39da2f5ffa50fe4c4949be814b6=hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6., 34c9471a445b0a39d3a7aea3fe2954ec=hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec.} 2023-07-12 10:58:34,095 INFO [RS:2;jenkins-hbase9:40681] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 10:58:34,096 DEBUG [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1504): Waiting on 34c9471a445b0a39d3a7aea3fe2954ec, 4c57f39da2f5ffa50fe4c4949be814b6 2023-07-12 10:58:34,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 4c57f39da2f5ffa50fe4c4949be814b6, disabling compactions & flushes 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,40681,1689159510416 2023-07-12 10:58:34,096 DEBUG [RS:2;jenkins-hbase9:40681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x30826121 to 127.0.0.1:55124 2023-07-12 10:58:34,096 DEBUG [RS:2;jenkins-hbase9:40681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:34,096 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 10:58:34,096 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 10:58:34,097 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 10:58:34,096 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:34,095 INFO [RS:3;jenkins-hbase9:41989] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 10:58:34,097 DEBUG [RS:1;jenkins-hbase9:35705] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18eac03c to 127.0.0.1:55124 2023-07-12 10:58:34,097 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 10:58:34,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:34,097 DEBUG [RS:1;jenkins-hbase9:35705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,097 INFO [RS:3;jenkins-hbase9:41989] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 10:58:34,097 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,35705,1689159510267; all regions closed. 2023-07-12 10:58:34,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:34,097 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:34,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. after waiting 0 ms 2023-07-12 10:58:34,097 DEBUG [RS:3;jenkins-hbase9:41989] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08acd93f to 127.0.0.1:55124 2023-07-12 10:58:34,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:34,097 DEBUG [RS:3;jenkins-hbase9:41989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 4c57f39da2f5ffa50fe4c4949be814b6 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 10:58:34,097 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,41989,1689159511816; all regions closed. 
2023-07-12 10:58:34,098 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 10:58:34,098 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 10:58:34,098 DEBUG [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 10:58:34,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 10:58:34,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 10:58:34,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 10:58:34,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 10:58:34,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 10:58:34,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-12 10:58:34,102 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/WALs/jenkins-hbase9.apache.org,35705,1689159510267/jenkins-hbase9.apache.org%2C35705%2C1689159510267.1689159510962 not finished, retry = 0 2023-07-12 10:58:34,107 DEBUG [RS:3;jenkins-hbase9:41989] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs 2023-07-12 10:58:34,107 INFO [RS:3;jenkins-hbase9:41989] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41989%2C1689159511816:(num 1689159512191) 2023-07-12 10:58:34,108 DEBUG [RS:3;jenkins-hbase9:41989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,108 INFO [RS:3;jenkins-hbase9:41989] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,112 INFO [RS:3;jenkins-hbase9:41989] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:34,112 INFO [RS:3;jenkins-hbase9:41989] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:34,112 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:34,112 INFO [RS:3;jenkins-hbase9:41989] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:34,112 INFO [RS:3;jenkins-hbase9:41989] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 10:58:34,113 INFO [RS:3;jenkins-hbase9:41989] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:41989 2023-07-12 10:58:34,114 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,114 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41989,1689159511816 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,115 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,115 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,41989,1689159511816] 2023-07-12 10:58:34,115 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,41989,1689159511816; numProcessing=1 2023-07-12 10:58:34,116 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,41989,1689159511816 already deleted, retry=false 2023-07-12 10:58:34,117 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,41989,1689159511816 expired; onlineServers=3 2023-07-12 10:58:34,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), 
to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/.tmp/info/bf6540e9f1c64f8d9955e539c67bde50 2023-07-12 10:58:34,129 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/info/a7592c833a834c84844ed707231ff570 2023-07-12 10:58:34,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf6540e9f1c64f8d9955e539c67bde50 2023-07-12 10:58:34,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/.tmp/info/bf6540e9f1c64f8d9955e539c67bde50 as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/info/bf6540e9f1c64f8d9955e539c67bde50 2023-07-12 10:58:34,134 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7592c833a834c84844ed707231ff570 2023-07-12 10:58:34,136 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,136 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf6540e9f1c64f8d9955e539c67bde50 2023-07-12 10:58:34,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/info/bf6540e9f1c64f8d9955e539c67bde50, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 10:58:34,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 4c57f39da2f5ffa50fe4c4949be814b6 in 43ms, sequenceid=9, compaction requested=false 2023-07-12 10:58:34,141 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,155 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/namespace/4c57f39da2f5ffa50fe4c4949be814b6/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 10:58:34,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 4c57f39da2f5ffa50fe4c4949be814b6: 2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689159511145.4c57f39da2f5ffa50fe4c4949be814b6. 
2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 34c9471a445b0a39d3a7aea3fe2954ec, disabling compactions & flushes 2023-07-12 10:58:34,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. after waiting 0 ms 2023-07-12 10:58:34,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:34,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 34c9471a445b0a39d3a7aea3fe2954ec 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-12 10:58:34,158 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,159 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/rep_barrier/47270e717daa4df5839ea6ea6a51b2cd 2023-07-12 10:58:34,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47270e717daa4df5839ea6ea6a51b2cd 2023-07-12 10:58:34,182 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/.tmp/m/30c8ae1343c4420b9501a4a152c41099 2023-07-12 10:58:34,187 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/table/fe894e5a422e46ba881c6f16af727ef1 2023-07-12 10:58:34,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 30c8ae1343c4420b9501a4a152c41099 2023-07-12 10:58:34,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/.tmp/m/30c8ae1343c4420b9501a4a152c41099 as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/m/30c8ae1343c4420b9501a4a152c41099 2023-07-12 10:58:34,194 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe894e5a422e46ba881c6f16af727ef1 2023-07-12 10:58:34,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/info/a7592c833a834c84844ed707231ff570 as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/info/a7592c833a834c84844ed707231ff570 2023-07-12 10:58:34,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 30c8ae1343c4420b9501a4a152c41099 2023-07-12 10:58:34,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/m/30c8ae1343c4420b9501a4a152c41099, entries=12, sequenceid=29, filesize=5.4 K 2023-07-12 10:58:34,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 34c9471a445b0a39d3a7aea3fe2954ec in 40ms, sequenceid=29, compaction requested=false 2023-07-12 10:58:34,204 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7592c833a834c84844ed707231ff570 2023-07-12 10:58:34,204 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/info/a7592c833a834c84844ed707231ff570, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 10:58:34,205 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/rep_barrier/47270e717daa4df5839ea6ea6a51b2cd as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/rep_barrier/47270e717daa4df5839ea6ea6a51b2cd 2023-07-12 10:58:34,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/rsgroup/34c9471a445b0a39d3a7aea3fe2954ec/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 10:58:34,206 DEBUG [RS:1;jenkins-hbase9:35705] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs 2023-07-12 10:58:34,206 INFO [RS:1;jenkins-hbase9:35705] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C35705%2C1689159510267:(num 1689159510962) 2023-07-12 10:58:34,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:34,206 DEBUG [RS:1;jenkins-hbase9:35705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,206 INFO [RS:1;jenkins-hbase9:35705] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,206 INFO [RS:1;jenkins-hbase9:35705] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:34,207 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:34,207 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:34,207 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 34c9471a445b0a39d3a7aea3fe2954ec: 2023-07-12 10:58:34,207 INFO [RS:1;jenkins-hbase9:35705] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:34,208 INFO [RS:1;jenkins-hbase9:35705] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:34,208 INFO [RS:1;jenkins-hbase9:35705] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:34,209 INFO [RS:1;jenkins-hbase9:35705] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:35705 2023-07-12 10:58:34,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689159511312.34c9471a445b0a39d3a7aea3fe2954ec. 2023-07-12 10:58:34,212 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:34,212 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:34,213 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,212 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35705,1689159510267 2023-07-12 10:58:34,214 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,35705,1689159510267] 2023-07-12 10:58:34,214 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,35705,1689159510267; numProcessing=2 2023-07-12 10:58:34,215 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47270e717daa4df5839ea6ea6a51b2cd 2023-07-12 10:58:34,215 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/rep_barrier/47270e717daa4df5839ea6ea6a51b2cd, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 10:58:34,216 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,35705,1689159510267 already deleted, retry=false 2023-07-12 10:58:34,216 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,35705,1689159510267 expired; onlineServers=2 2023-07-12 10:58:34,216 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/.tmp/table/fe894e5a422e46ba881c6f16af727ef1 as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/table/fe894e5a422e46ba881c6f16af727ef1 2023-07-12 10:58:34,222 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe894e5a422e46ba881c6f16af727ef1 2023-07-12 10:58:34,222 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/table/fe894e5a422e46ba881c6f16af727ef1, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 10:58:34,223 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 124ms, sequenceid=26, compaction requested=false 2023-07-12 10:58:34,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 10:58:34,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 10:58:34,236 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:34,236 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 10:58:34,236 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 10:58:34,278 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:34,278 INFO [RS:3;jenkins-hbase9:41989] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,41989,1689159511816; zookeeper connection closed. 2023-07-12 10:58:34,278 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:41989-0x10159216b37000b, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 10:58:34,279 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@40e7f0db] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@40e7f0db 2023-07-12 10:58:34,296 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,34407,1689159510107; all regions closed. 2023-07-12 10:58:34,298 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,40681,1689159510416; all regions closed. 
2023-07-12 10:58:34,304 DEBUG [RS:0;jenkins-hbase9:34407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C34407%2C1689159510107:(num 1689159510962) 2023-07-12 10:58:34,304 DEBUG [RS:0;jenkins-hbase9:34407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] regionserver.LeaseManager(133): Closed leases 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 10:58:34,304 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 10:58:34,304 INFO [RS:0;jenkins-hbase9:34407] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 10:58:34,306 INFO [RS:0;jenkins-hbase9:34407] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:34407 2023-07-12 10:58:34,306 DEBUG [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs 2023-07-12 10:58:34,306 INFO [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40681%2C1689159510416.meta:.meta(num 1689159511080) 2023-07-12 10:58:34,309 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:34,309 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 10:58:34,309 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,34407,1689159510107 2023-07-12 10:58:34,309 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,34407,1689159510107] 2023-07-12 10:58:34,309 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,34407,1689159510107; numProcessing=3 2023-07-12 10:58:34,311 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,34407,1689159510107 already deleted, retry=false 2023-07-12 10:58:34,312 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,34407,1689159510107 expired; onlineServers=1 2023-07-12 10:58:34,314 DEBUG [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/oldWALs
2023-07-12 10:58:34,314 INFO [RS:2;jenkins-hbase9:40681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40681%2C1689159510416:(num 1689159510969)
2023-07-12 10:58:34,314 DEBUG [RS:2;jenkins-hbase9:40681] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-12 10:58:34,314 INFO [RS:2;jenkins-hbase9:40681] regionserver.LeaseManager(133): Closed leases
2023-07-12 10:58:34,314 INFO [RS:2;jenkins-hbase9:40681] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-12 10:58:34,314 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-12 10:58:34,315 INFO [RS:2;jenkins-hbase9:40681] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:40681
2023-07-12 10:58:34,317 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,40681,1689159510416
2023-07-12 10:58:34,317 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-12 10:58:34,319 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,40681,1689159510416]
2023-07-12 10:58:34,319 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,40681,1689159510416; numProcessing=4
2023-07-12 10:58:34,320 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,40681,1689159510416 already deleted, retry=false
2023-07-12 10:58:34,320 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,40681,1689159510416 expired; onlineServers=0
2023-07-12 10:58:34,320 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,46627,1689159509932' *****
2023-07-12 10:58:34,320 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-12 10:58:34,321 DEBUG [M:0;jenkins-hbase9:46627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c9bad52, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0
2023-07-12 10:58:34,321 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-12 10:58:34,324 INFO [M:0;jenkins-hbase9:46627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@375d08da{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-12 10:58:34,324 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-12 10:58:34,324 INFO [M:0;jenkins-hbase9:46627] server.AbstractConnector(383): Stopped ServerConnector@66a56199{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-12 10:58:34,324 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-12 10:58:34,324 INFO [M:0;jenkins-hbase9:46627] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-12 10:58:34,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-12 10:58:34,325 INFO [M:0;jenkins-hbase9:46627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@fb17cb5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-12 10:58:34,326 INFO [M:0;jenkins-hbase9:46627] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42f238c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/hadoop.log.dir/,STOPPED}
2023-07-12 10:58:34,326 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,46627,1689159509932
2023-07-12 10:58:34,326 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,46627,1689159509932; all regions closed.
2023-07-12 10:58:34,326 DEBUG [M:0;jenkins-hbase9:46627] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-12 10:58:34,326 INFO [M:0;jenkins-hbase9:46627] master.HMaster(1491): Stopping master jetty server
2023-07-12 10:58:34,327 INFO [M:0;jenkins-hbase9:46627] server.AbstractConnector(383): Stopped ServerConnector@25cd0fa1{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-12 10:58:34,327 DEBUG [M:0;jenkins-hbase9:46627] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-12 10:58:34,327 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-12 10:58:34,327 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159510718] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689159510718,5,FailOnTimeoutGroup]
2023-07-12 10:58:34,327 DEBUG [M:0;jenkins-hbase9:46627] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-12 10:58:34,327 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159510718] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689159510718,5,FailOnTimeoutGroup]
2023-07-12 10:58:34,327 INFO [M:0;jenkins-hbase9:46627] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-12 10:58:34,327 INFO [M:0;jenkins-hbase9:46627] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-12 10:58:34,327 INFO [M:0;jenkins-hbase9:46627] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown
2023-07-12 10:58:34,327 DEBUG [M:0;jenkins-hbase9:46627] master.HMaster(1512): Stopping service threads
2023-07-12 10:58:34,328 INFO [M:0;jenkins-hbase9:46627] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-12 10:58:34,328 ERROR [M:0;jenkins-hbase9:46627] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-12 10:58:34,328 INFO [M:0;jenkins-hbase9:46627] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-12 10:58:34,328 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-12 10:58:34,328 DEBUG [M:0;jenkins-hbase9:46627] zookeeper.ZKUtil(398): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-12 10:58:34,328 WARN [M:0;jenkins-hbase9:46627] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-12 10:58:34,328 INFO [M:0;jenkins-hbase9:46627] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-12 10:58:34,328 INFO [M:0;jenkins-hbase9:46627] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-12 10:58:34,329 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-12 10:58:34,329 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-12 10:58:34,329 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-12 10:58:34,329 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-12 10:58:34,329 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-12 10:58:34,329 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.66 KB
2023-07-12 10:58:34,339 INFO [M:0;jenkins-hbase9:46627] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/472d69b228bc40b0ac3d245629e04c08
2023-07-12 10:58:34,345 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/472d69b228bc40b0ac3d245629e04c08 as hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/472d69b228bc40b0ac3d245629e04c08
2023-07-12 10:58:34,349 INFO [M:0;jenkins-hbase9:46627] regionserver.HStore(1080): Added hdfs://localhost:32939/user/jenkins/test-data/1f1e271a-1734-5b43-b46d-56b2fc9621ab/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/472d69b228bc40b0ac3d245629e04c08, entries=22, sequenceid=175, filesize=11.1 K
2023-07-12 10:58:34,350 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78011, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false
2023-07-12 10:58:34,352 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-12 10:58:34,352 DEBUG [M:0;jenkins-hbase9:46627] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-12 10:58:34,355 INFO [M:0;jenkins-hbase9:46627] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-12 10:58:34,355 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-12 10:58:34,356 INFO [M:0;jenkins-hbase9:46627] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:46627
2023-07-12 10:58:34,357 DEBUG [M:0;jenkins-hbase9:46627] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,46627,1689159509932 already deleted, retry=false
2023-07-12 10:58:34,880 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:34,880 INFO [M:0;jenkins-hbase9:46627] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,46627,1689159509932; zookeeper connection closed.
2023-07-12 10:58:34,880 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): master:46627-0x10159216b370000, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:34,980 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:34,980 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:40681-0x10159216b370003, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:34,980 INFO [RS:2;jenkins-hbase9:40681] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,40681,1689159510416; zookeeper connection closed.
2023-07-12 10:58:34,981 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4af09c4f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4af09c4f
2023-07-12 10:58:35,081 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:35,081 INFO [RS:0;jenkins-hbase9:34407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,34407,1689159510107; zookeeper connection closed.
2023-07-12 10:58:35,081 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:34407-0x10159216b370001, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:35,081 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17e9eeea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17e9eeea
2023-07-12 10:58:35,181 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:35,181 INFO [RS:1;jenkins-hbase9:35705] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,35705,1689159510267; zookeeper connection closed.
2023-07-12 10:58:35,181 DEBUG [Listener at localhost/34129-EventThread] zookeeper.ZKWatcher(600): regionserver:35705-0x10159216b370002, quorum=127.0.0.1:55124, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-12 10:58:35,181 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@760b71b7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@760b71b7
2023-07-12 10:58:35,181 INFO [Listener at localhost/34129] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-12 10:58:35,182 WARN [Listener at localhost/34129] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 10:58:35,185 INFO [Listener at localhost/34129] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 10:58:35,288 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 10:58:35,288 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1956518920-172.31.2.10-1689159509229 (Datanode Uuid 39ccd5f1-2f92-4056-9583-ee03dbf54a67) service to localhost/127.0.0.1:32939
2023-07-12 10:58:35,289 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data5/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,289 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data6/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,290 WARN [Listener at localhost/34129] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 10:58:35,293 INFO [Listener at localhost/34129] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 10:58:35,395 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 10:58:35,395 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1956518920-172.31.2.10-1689159509229 (Datanode Uuid 2250feb9-cc06-4f68-adcc-d5e0e1809ccb) service to localhost/127.0.0.1:32939
2023-07-12 10:58:35,396 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data3/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,396 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data4/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,397 WARN [Listener at localhost/34129] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-12 10:58:35,400 INFO [Listener at localhost/34129] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 10:58:35,502 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-12 10:58:35,502 WARN [BP-1956518920-172.31.2.10-1689159509229 heartbeating to localhost/127.0.0.1:32939] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1956518920-172.31.2.10-1689159509229 (Datanode Uuid 9bf4664a-adcf-4f06-b2dd-b7ede5a070f8) service to localhost/127.0.0.1:32939
2023-07-12 10:58:35,503 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data1/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,503 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/49e285dc-ac7b-d3f4-6a9d-38636eacc2c4/cluster_f979ccf7-6780-59e2-005a-6696e4a61f98/dfs/data/data2/current/BP-1956518920-172.31.2.10-1689159509229] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-12 10:58:35,513 INFO [Listener at localhost/34129] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-12 10:58:35,627 INFO [Listener at localhost/34129] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-12 10:58:35,652 INFO [Listener at localhost/34129] hbase.HBaseTestingUtility(1293): Minicluster is down