2023-07-19 21:14:56,631 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452 2023-07-19 21:14:56,650 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-19 21:14:56,676 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 21:14:56,677 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520, deleteOnExit=true 2023-07-19 21:14:56,677 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 21:14:56,678 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/test.cache.data in system properties and HBase conf 2023-07-19 21:14:56,678 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 21:14:56,678 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir in system properties and HBase conf 2023-07-19 21:14:56,680 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 21:14:56,680 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 21:14:56,680 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 21:14:56,796 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-19 21:14:57,282 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-19 21:14:57,287 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:14:57,287 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:14:57,288 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 21:14:57,288 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:14:57,288 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 21:14:57,289 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 21:14:57,289 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:14:57,289 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:14:57,290 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 21:14:57,290 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/nfs.dump.dir in system properties and HBase conf 2023-07-19 21:14:57,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir in system properties and HBase conf 2023-07-19 21:14:57,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:14:57,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 21:14:57,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 21:14:58,008 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:14:58,013 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:14:58,301 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-19 21:14:58,466 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-19 21:14:58,481 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:14:58,522 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:14:58,558 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/Jetty_localhost_36209_hdfs____hvhfz6/webapp 2023-07-19 21:14:58,702 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36209 2023-07-19 21:14:58,714 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:14:58,715 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:14:59,180 WARN [Listener at localhost/40615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:14:59,266 WARN [Listener at localhost/40615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:14:59,297 WARN [Listener at localhost/40615] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:14:59,307 INFO [Listener at localhost/40615] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:14:59,316 INFO [Listener at localhost/40615] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/Jetty_localhost_36305_datanode____.9d7azo/webapp 2023-07-19 21:14:59,442 INFO [Listener at localhost/40615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36305 2023-07-19 21:14:59,927 WARN [Listener at localhost/42759] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:14:59,992 WARN [Listener at localhost/42759] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:14:59,998 WARN [Listener at localhost/42759] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:00,000 INFO [Listener at localhost/42759] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:00,005 INFO [Listener at localhost/42759] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/Jetty_localhost_34087_datanode____.1t2nl7/webapp 2023-07-19 21:15:00,113 INFO [Listener at localhost/42759] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34087 2023-07-19 21:15:00,134 WARN [Listener at localhost/43627] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:00,168 WARN [Listener at localhost/43627] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:00,172 WARN [Listener at localhost/43627] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:00,174 INFO [Listener at localhost/43627] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:00,182 INFO [Listener at localhost/43627] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/Jetty_localhost_41657_datanode____7hiw4i/webapp 2023-07-19 21:15:00,317 INFO [Listener at localhost/43627] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41657 2023-07-19 21:15:00,341 WARN [Listener at localhost/39507] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:00,541 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ec454f5567a7b02: Processing first storage report for DS-4af41a0e-57fd-4734-935f-88dc86c0119f from datanode 675a1942-e574-4708-8086-d20f31149659 2023-07-19 21:15:00,542 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ec454f5567a7b02: from storage DS-4af41a0e-57fd-4734-935f-88dc86c0119f node DatanodeRegistration(127.0.0.1:43895, datanodeUuid=675a1942-e574-4708-8086-d20f31149659, infoPort=45857, 
infoSecurePort=0, ipcPort=39507, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,542 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe48d39372dd92a4: Processing first storage report for DS-7f25dcb0-5556-4324-834d-aa6465a78e8b from datanode 3ea9533e-7ee4-447a-994e-c604c2effdda 2023-07-19 21:15:00,542 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe48d39372dd92a4: from storage DS-7f25dcb0-5556-4324-834d-aa6465a78e8b node DatanodeRegistration(127.0.0.1:37103, datanodeUuid=3ea9533e-7ee4-447a-994e-c604c2effdda, infoPort=33685, infoSecurePort=0, ipcPort=42759, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8baa96fb508c47d3: Processing first storage report for DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779 from datanode 3bf1676a-6788-4615-a0cc-a155abb7b2b2 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8baa96fb508c47d3: from storage DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779 node DatanodeRegistration(127.0.0.1:36045, datanodeUuid=3bf1676a-6788-4615-a0cc-a155abb7b2b2, infoPort=43163, infoSecurePort=0, ipcPort=43627, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ec454f5567a7b02: Processing first storage report for DS-192905f9-1a9d-44d3-b3b1-bf1e992a9e4b from datanode 675a1942-e574-4708-8086-d20f31149659 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ec454f5567a7b02: from storage DS-192905f9-1a9d-44d3-b3b1-bf1e992a9e4b node DatanodeRegistration(127.0.0.1:43895, datanodeUuid=675a1942-e574-4708-8086-d20f31149659, infoPort=45857, infoSecurePort=0, ipcPort=39507, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe48d39372dd92a4: Processing first storage report for DS-fd0f4632-17f9-4931-a1b4-26a1aacd28d2 from datanode 3ea9533e-7ee4-447a-994e-c604c2effdda 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe48d39372dd92a4: from storage DS-fd0f4632-17f9-4931-a1b4-26a1aacd28d2 node DatanodeRegistration(127.0.0.1:37103, datanodeUuid=3ea9533e-7ee4-447a-994e-c604c2effdda, infoPort=33685, infoSecurePort=0, ipcPort=42759, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8baa96fb508c47d3: Processing first storage report for DS-f1101072-b7c3-4136-a506-7aa27bc5b616 from datanode 3bf1676a-6788-4615-a0cc-a155abb7b2b2 2023-07-19 21:15:00,543 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8baa96fb508c47d3: from storage 
DS-f1101072-b7c3-4136-a506-7aa27bc5b616 node DatanodeRegistration(127.0.0.1:36045, datanodeUuid=3bf1676a-6788-4615-a0cc-a155abb7b2b2, infoPort=43163, infoSecurePort=0, ipcPort=43627, storageInfo=lv=-57;cid=testClusterID;nsid=434066900;c=1689801298089), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-19 21:15:00,753 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452 2023-07-19 21:15:00,869 INFO [Listener at localhost/39507] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/zookeeper_0, clientPort=58627, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 21:15:00,893 INFO [Listener at localhost/39507] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58627 2023-07-19 21:15:00,904 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:00,907 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:01,254 INFO [Listener at localhost/39507] util.FSUtils(471): Created version file at hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 with version=8 2023-07-19 21:15:01,254 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/hbase-staging 2023-07-19 21:15:01,263 DEBUG [Listener at localhost/39507] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 21:15:01,263 DEBUG [Listener at localhost/39507] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 21:15:01,263 DEBUG [Listener at localhost/39507] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 21:15:01,263 DEBUG [Listener at localhost/39507] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
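For reference, the StartMiniClusterOption values logged at the top of this run (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1) correspond to a mini-cluster startup roughly like the sketch below. This is only an illustration of the standard HBaseTestingUtility API; it is not the actual setup code of TestRSGroupsAdmin1, and the class name MiniClusterStartupSketch is hypothetical.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {  // hypothetical wrapper, for illustration only
      static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      public static void main(String[] args) throws Exception {
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // one master, as logged above
            .numRegionServers(3)  // three region servers
            .numDataNodes(3)      // three HDFS datanodes
            .numZkServers(1)      // one mini ZooKeeper server
            .build();
        TEST_UTIL.startMiniCluster(option);  // brings up DFS, ZooKeeper, the master and the region servers
        try {
          // test logic would run against TEST_UTIL here
        } finally {
          TEST_UTIL.shutdownMiniCluster();   // tears the cluster down; the data dir above is created with deleteOnExit=true
        }
      }
    }

The startMiniCluster(option) call is what produces the DFS, ZooKeeper, master and region-server startup records that follow in this log.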
2023-07-19 21:15:01,650 INFO [Listener at localhost/39507] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-19 21:15:02,235 INFO [Listener at localhost/39507] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:02,277 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:02,277 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:02,277 INFO [Listener at localhost/39507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:02,277 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:02,278 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:02,434 INFO [Listener at localhost/39507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:02,511 DEBUG [Listener at localhost/39507] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-19 21:15:02,625 INFO [Listener at localhost/39507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36267 2023-07-19 21:15:02,638 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:02,640 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:02,671 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36267 connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:02,732 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:362670x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:02,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36267-0x1017f701e770000 connected 2023-07-19 21:15:02,770 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:02,771 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:02,775 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:02,786 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36267 2023-07-19 21:15:02,786 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36267 2023-07-19 21:15:02,787 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36267 2023-07-19 21:15:02,787 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36267 2023-07-19 21:15:02,788 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36267 2023-07-19 21:15:02,821 INFO [Listener at localhost/39507] log.Log(170): Logging initialized @6983ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-19 21:15:02,958 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:02,959 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:02,959 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:02,961 INFO [Listener at localhost/39507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 21:15:02,962 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:02,962 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:02,965 INFO [Listener at localhost/39507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-19 21:15:03,031 INFO [Listener at localhost/39507] http.HttpServer(1146): Jetty bound to port 39695 2023-07-19 21:15:03,032 INFO [Listener at localhost/39507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:03,060 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,063 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2db17b81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:03,064 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,064 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1a079e3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:03,242 INFO [Listener at localhost/39507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:03,256 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:03,256 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:03,258 INFO [Listener at localhost/39507] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:03,267 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,297 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@47177c10{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/jetty-0_0_0_0-39695-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4348936479161701423/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:03,310 INFO [Listener at localhost/39507] server.AbstractConnector(333): Started ServerConnector@cbd2559{HTTP/1.1, (http/1.1)}{0.0.0.0:39695} 2023-07-19 21:15:03,310 INFO [Listener at localhost/39507] server.Server(415): Started @7473ms 2023-07-19 21:15:03,315 INFO [Listener at localhost/39507] master.HMaster(444): hbase.rootdir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769, hbase.cluster.distributed=false 2023-07-19 21:15:03,415 INFO [Listener at localhost/39507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:03,416 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,416 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,416 INFO 
[Listener at localhost/39507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:03,416 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,416 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:03,422 INFO [Listener at localhost/39507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:03,425 INFO [Listener at localhost/39507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33985 2023-07-19 21:15:03,428 INFO [Listener at localhost/39507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:03,436 DEBUG [Listener at localhost/39507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:03,437 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,439 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,441 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33985 connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:03,449 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:339850x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:03,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33985-0x1017f701e770001 connected 2023-07-19 21:15:03,451 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:03,453 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:03,454 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:03,455 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33985 2023-07-19 21:15:03,455 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33985 2023-07-19 21:15:03,456 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33985 2023-07-19 21:15:03,456 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33985 2023-07-19 21:15:03,457 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33985 2023-07-19 21:15:03,460 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:03,460 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:03,461 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:03,462 INFO [Listener at localhost/39507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:03,462 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:03,463 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:03,463 INFO [Listener at localhost/39507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:03,465 INFO [Listener at localhost/39507] http.HttpServer(1146): Jetty bound to port 36311 2023-07-19 21:15:03,466 INFO [Listener at localhost/39507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:03,472 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,473 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b56872c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:03,473 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,473 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@180451a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:03,616 INFO [Listener at localhost/39507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:03,617 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:03,617 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:03,617 INFO [Listener at localhost/39507] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:03,619 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,623 INFO 
[Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@75640050{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/jetty-0_0_0_0-36311-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6135038677419589709/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:03,624 INFO [Listener at localhost/39507] server.AbstractConnector(333): Started ServerConnector@4aa1e459{HTTP/1.1, (http/1.1)}{0.0.0.0:36311} 2023-07-19 21:15:03,624 INFO [Listener at localhost/39507] server.Server(415): Started @7787ms 2023-07-19 21:15:03,641 INFO [Listener at localhost/39507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:03,641 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,641 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,642 INFO [Listener at localhost/39507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:03,642 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,642 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:03,642 INFO [Listener at localhost/39507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:03,644 INFO [Listener at localhost/39507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45225 2023-07-19 21:15:03,644 INFO [Listener at localhost/39507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:03,645 DEBUG [Listener at localhost/39507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:03,646 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,648 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,649 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45225 connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:03,652 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:452250x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
21:15:03,654 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45225-0x1017f701e770002 connected 2023-07-19 21:15:03,654 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:03,655 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:03,656 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:03,656 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45225 2023-07-19 21:15:03,656 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45225 2023-07-19 21:15:03,657 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45225 2023-07-19 21:15:03,657 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45225 2023-07-19 21:15:03,657 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45225 2023-07-19 21:15:03,660 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:03,660 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:03,660 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:03,661 INFO [Listener at localhost/39507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:03,661 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:03,661 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:03,661 INFO [Listener at localhost/39507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
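Since this run builds the hbase-rsgroup module for TestRSGroupsAdmin1, here is a minimal sketch of how a test would typically exercise region-server-group administration against the cluster started above. It assumes the branch-2.4 RSGroupAdminClient API; the helper class name RSGroupAdminSketch, the group name "appInfo", and the chosen server address (the regionserver bound to port 33985 in the log above) are illustrative only and not taken from the actual test source.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupAdminSketch {  // hypothetical helper, for illustration only
      static void exerciseRSGroups(HBaseTestingUtility util) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(util.getConnection());
        rsGroupAdmin.addRSGroup("appInfo");  // create an example region server group
        // Move one of the region servers started above (addressed as host:port, no start code) into the group.
        Address server = Address.fromString("jenkins-hbase4.apache.org:33985");
        rsGroupAdmin.moveServers(Collections.singleton(server), "appInfo");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appInfo");
        System.out.println("appInfo members: " + info.getServers());  // should now contain the moved server
      }
    }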
2023-07-19 21:15:03,662 INFO [Listener at localhost/39507] http.HttpServer(1146): Jetty bound to port 42693 2023-07-19 21:15:03,662 INFO [Listener at localhost/39507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:03,663 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,664 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e84c820{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:03,664 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,664 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34301e2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:03,797 INFO [Listener at localhost/39507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:03,798 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:03,798 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:03,798 INFO [Listener at localhost/39507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:03,799 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,800 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7eecefb7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/jetty-0_0_0_0-42693-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4786983172111739164/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:03,801 INFO [Listener at localhost/39507] server.AbstractConnector(333): Started ServerConnector@7394d09{HTTP/1.1, (http/1.1)}{0.0.0.0:42693} 2023-07-19 21:15:03,802 INFO [Listener at localhost/39507] server.Server(415): Started @7965ms 2023-07-19 21:15:03,816 INFO [Listener at localhost/39507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:03,816 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,817 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,817 INFO [Listener at localhost/39507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:03,817 INFO 
[Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:03,817 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:03,817 INFO [Listener at localhost/39507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:03,819 INFO [Listener at localhost/39507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33539 2023-07-19 21:15:03,819 INFO [Listener at localhost/39507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:03,821 DEBUG [Listener at localhost/39507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:03,822 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,823 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:03,825 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33539 connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:03,828 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:335390x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:03,830 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33539-0x1017f701e770003 connected 2023-07-19 21:15:03,830 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:03,831 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:03,832 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:03,832 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33539 2023-07-19 21:15:03,833 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33539 2023-07-19 21:15:03,833 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33539 2023-07-19 21:15:03,834 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33539 2023-07-19 21:15:03,834 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33539 2023-07-19 21:15:03,836 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:03,837 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:03,837 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:03,837 INFO [Listener at localhost/39507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:03,837 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:03,838 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:03,838 INFO [Listener at localhost/39507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:03,839 INFO [Listener at localhost/39507] http.HttpServer(1146): Jetty bound to port 38109 2023-07-19 21:15:03,839 INFO [Listener at localhost/39507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:03,841 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,841 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b0e15fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:03,842 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,842 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@22b75a27{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:03,961 INFO [Listener at localhost/39507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:03,962 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:03,962 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:03,963 INFO [Listener at localhost/39507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:03,964 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:03,965 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@60f62ff2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/jetty-0_0_0_0-38109-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2295084453950460773/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:03,966 INFO [Listener at localhost/39507] server.AbstractConnector(333): Started ServerConnector@447a00c4{HTTP/1.1, (http/1.1)}{0.0.0.0:38109} 2023-07-19 21:15:03,966 INFO [Listener at localhost/39507] server.Server(415): Started @8129ms 2023-07-19 21:15:03,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:03,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@626aec64{HTTP/1.1, (http/1.1)}{0.0.0.0:42843} 2023-07-19 21:15:03,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8142ms 2023-07-19 21:15:03,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:03,989 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:03,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,013 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:04,013 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:04,013 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:04,013 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:04,015 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:04,018 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:04,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36267,1689801301454 from backup master directory 2023-07-19 21:15:04,023 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,023 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:04,024 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:04,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-19 21:15:04,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-19 21:15:04,134 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/hbase.id with ID: 04d140fc-999b-49d1-9db4-bb9fac47eabb 2023-07-19 21:15:04,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:04,200 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7a5d5391 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:04,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@471f70e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:04,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:04,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 21:15:04,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-19 21:15:04,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-19 21:15:04,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-19 21:15:04,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-19 21:15:04,343 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:04,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store-tmp 2023-07-19 21:15:04,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:04,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:04,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:04,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:04,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:04,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:04,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
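The two DEBUG stack traces earlier in this burst (the missing CreateFlag.SHOULD_REPLICATE constant and the missing DFSClient.decryptEncryptedDataEncryptionKey method) are HBase probing, via reflection, which Hadoop version it is running against and falling back to an older code path when a capability is absent. The following is a minimal, JDK-only sketch of that probe-and-fallback pattern; the ReplicationFlag enum and the lookups in main are illustrative stand-ins, not HBase's own types or calls.

import java.lang.reflect.Method;

public class CapabilityProbe {
  // Illustrative stand-in for a platform enum whose newer constants may be absent at runtime.
  enum ReplicationFlag { CREATE, OVERWRITE /* SHOULD_REPLICATE exists only on newer platforms */ }

  // True if the named constant exists in the enum class we are actually running against.
  static <T extends Enum<T>> boolean hasEnumConstant(Class<T> type, String name) {
    try {
      Enum.valueOf(type, name);
      return true;
    } catch (IllegalArgumentException e) {
      // Same signal the helper logs above: "can not find SHOULD_REPLICATE flag".
      return false;
    }
  }

  // The Method if it is present, or null when the platform predates it.
  static Method findOptionalMethod(Class<?> owner, String name, Class<?>... params) {
    try {
      return owner.getDeclaredMethod(name, params);
    } catch (NoSuchMethodException e) {
      return null;
    }
  }

  public static void main(String[] args) {
    boolean hasFlag = hasEnumConstant(ReplicationFlag.class, "SHOULD_REPLICATE"); // false here
    Method toString = findOptionalMethod(Object.class, "toString");               // present
    System.out.println(hasFlag ? "use replicate-aware path" : "fall back to legacy path");
    System.out.println("optional method resolved: " + (toString != null));
  }
}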
2023-07-19 21:15:04,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:04,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/WALs/jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36267%2C1689801301454, suffix=, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/WALs/jenkins-hbase4.apache.org,36267,1689801301454, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/oldWALs, maxLogs=10 2023-07-19 21:15:04,523 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:04,523 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:04,523 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:04,532 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-19 21:15:04,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/WALs/jenkins-hbase4.apache.org,36267,1689801301454/jenkins-hbase4.apache.org%2C36267%2C1689801301454.1689801304469 2023-07-19 21:15:04,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK]] 2023-07-19 21:15:04,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:04,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:04,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,686 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,693 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 21:15:04,729 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 21:15:04,742 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-19 21:15:04,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,774 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:04,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:04,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11346461440, jitterRate=0.05672156810760498}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:04,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:04,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 21:15:04,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 21:15:04,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 21:15:04,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 21:15:04,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-19 21:15:04,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 43 msec 2023-07-19 21:15:04,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 21:15:04,889 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 21:15:04,895 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-19 21:15:04,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 21:15:04,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 21:15:04,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 21:15:04,917 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 21:15:04,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 21:15:04,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 21:15:04,941 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:04,941 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:04,941 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,941 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:04,941 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:04,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36267,1689801301454, sessionid=0x1017f701e770000, setting cluster-up flag (Was=false) 2023-07-19 21:15:04,961 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 21:15:04,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,974 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:04,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 21:15:04,983 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:04,986 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.hbase-snapshot/.tmp 2023-07-19 21:15:05,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 21:15:05,076 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(951): ClusterId : 04d140fc-999b-49d1-9db4-bb9fac47eabb 2023-07-19 21:15:05,079 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(951): ClusterId : 04d140fc-999b-49d1-9db4-bb9fac47eabb 2023-07-19 21:15:05,079 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(951): ClusterId : 04d140fc-999b-49d1-9db4-bb9fac47eabb 2023-07-19 21:15:05,086 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 21:15:05,088 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:05,088 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:05,088 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:05,093 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:05,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 21:15:05,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
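The ZKUtil lines above ("Set watcher on znode that does not yet exist, /hbase/balancer", "/hbase/normalizer", "/hbase/switch/split", "/hbase/switch/merge", "/hbase/snapshot-cleanup") rely on ZooKeeper's exists() call, which registers a watch even when the node is absent so the client is notified the moment it is created. A short sketch of that generic pattern with the stock org.apache.zookeeper client; the quorum address and the /hbase/balancer path are copied from the log, the zookeeper jar is assumed to be on the classpath, and error handling is trimmed for brevity.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ExistsWatchExample {
  public static void main(String[] args) throws Exception {
    // Quorum address as reported in the log; point this at your own ensemble.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:58627", 90_000, event -> {});

    Watcher balancerWatcher = (WatchedEvent event) ->
      System.out.println("znode event: " + event.getType() + " on " + event.getPath());

    // exists() registers the watch whether or not the node is there yet;
    // a null Stat simply means the znode has not been created so far.
    Stat stat = zk.exists("/hbase/balancer", balancerWatcher);
    System.out.println(stat == null
      ? "watch set on znode that does not yet exist"
      : "znode already present, version=" + stat.getVersion());

    zk.close();
  }
}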
2023-07-19 21:15:05,097 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:05,097 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:05,097 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:05,098 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:05,097 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:05,099 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:05,104 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:05,104 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:05,104 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:05,107 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ReadOnlyZKClient(139): Connect 0x6e372e56 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:05,112 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ReadOnlyZKClient(139): Connect 0x19ec330d to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:05,113 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ReadOnlyZKClient(139): Connect 0x276cf5d2 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:05,131 DEBUG [RS:2;jenkins-hbase4:33539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@722bfae7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:05,132 DEBUG [RS:2;jenkins-hbase4:33539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45198ac0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:05,136 DEBUG [RS:0;jenkins-hbase4:33985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19071a09, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:05,136 DEBUG [RS:1;jenkins-hbase4:45225] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66c3601, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:05,136 DEBUG [RS:0;jenkins-hbase4:33985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@355ce9a8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:05,137 DEBUG [RS:1;jenkins-hbase4:45225] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c41e5d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:05,201 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:45225 2023-07-19 21:15:05,202 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33539 2023-07-19 21:15:05,204 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33985 2023-07-19 21:15:05,207 INFO [RS:1;jenkins-hbase4:45225] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:05,208 INFO [RS:1;jenkins-hbase4:45225] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:05,207 INFO [RS:0;jenkins-hbase4:33985] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:05,208 INFO [RS:0;jenkins-hbase4:33985] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:05,207 INFO [RS:2;jenkins-hbase4:33539] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:05,208 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:05,208 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:05,208 INFO [RS:2;jenkins-hbase4:33539] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:05,209 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1022): About to register with Master. 
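The "Installed shutdown hook thread: Shutdownhook:RS:..." lines mean each region server registers a JVM shutdown hook so it can release its resources when the test JVM exits. A minimal JDK-only illustration of that mechanism; the thread name echoes the log's naming and the cleanup body is a placeholder, not the HBase implementation.

public class ShutdownHookExample {
  public static void main(String[] args) {
    Thread hook = new Thread(() -> {
      // Placeholder for real cleanup: flush memstores, close regions, drop ephemeral znodes, etc.
      System.out.println("shutdown hook running: releasing resources");
    }, "Shutdownhook:RS:1");

    // Registered hooks run when the JVM begins an orderly shutdown (normal exit or SIGTERM).
    Runtime.getRuntime().addShutdownHook(hook);

    System.out.println("server running; exiting main triggers the hook");
  }
}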
2023-07-19 21:15:05,212 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:45225, startcode=1689801303640 2023-07-19 21:15:05,212 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:33539, startcode=1689801303815 2023-07-19 21:15:05,212 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:33985, startcode=1689801303414 2023-07-19 21:15:05,235 DEBUG [RS:0;jenkins-hbase4:33985] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:05,235 DEBUG [RS:2;jenkins-hbase4:33539] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:05,237 DEBUG [RS:1;jenkins-hbase4:45225] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:05,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:05,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:05,312 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 21:15:05,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:05,313 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32925, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:05,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-19 21:15:05,313 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42783, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:05,313 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48365, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:05,319 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:05,319 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:05,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:05,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:05,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 21:15:05,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:05,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689801335327 2023-07-19 21:15:05,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 21:15:05,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 21:15:05,344 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:05,346 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 21:15:05,344 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:05,350 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 21:15:05,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 21:15:05,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 21:15:05,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 21:15:05,353 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:05,357 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:05,358 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:05,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 21:15:05,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 21:15:05,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 21:15:05,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 21:15:05,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 21:15:05,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801305406,5,FailOnTimeoutGroup] 2023-07-19 21:15:05,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801305406,5,FailOnTimeoutGroup] 2023-07-19 21:15:05,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 21:15:05,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,443 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 21:15:05,444 WARN [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 21:15:05,446 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 21:15:05,446 WARN [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
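The ServerNotRunningYetException traces and the "reportForDuty failed; sleeping 100 ms and then retrying" warnings are an expected startup race: the region servers keep re-attempting registration until the master finishes initializing. A self-contained sketch of that retry-with-sleep loop follows; register() is a stand-in that simulates the master eventually becoming ready, not the real reportForDuty RPC.

import java.util.concurrent.ThreadLocalRandom;

public class ReportForDutyRetry {
  // Stand-in for the registration RPC that fails while the master is still starting up.
  static boolean register() {
    return ThreadLocalRandom.current().nextInt(10) == 0; // simulate the master becoming ready
  }

  public static void main(String[] args) throws InterruptedException {
    final long sleepMs = 100; // matches the "sleeping 100 ms" warning in the log
    int attempt = 0;
    while (!register()) {
      attempt++;
      System.out.println("reportForDuty failed (attempt " + attempt + "); sleeping " + sleepMs
        + " ms and then retrying");
      Thread.sleep(sleepMs);
    }
    System.out.println("registered with master after " + attempt + " retries");
  }
}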
2023-07-19 21:15:05,446 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(2830): Master is not running yet 2023-07-19 21:15:05,446 WARN [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-19 21:15:05,473 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:05,474 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:05,475 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 2023-07-19 21:15:05,539 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:05,545 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:33985, startcode=1689801303414 2023-07-19 21:15:05,547 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:33539, startcode=1689801303815 2023-07-19 21:15:05,547 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:45225, startcode=1689801303640 2023-07-19 21:15:05,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:05,555 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:05,557 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:05,558 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:05,560 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 21:15:05,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:05,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:05,567 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:05,568 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:05,570 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:05,571 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:05,573 INFO 
[RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,576 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:05,576 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 21:15:05,577 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:05,578 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,578 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:05,578 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 21:15:05,578 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:05,579 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 2023-07-19 21:15:05,579 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40615 2023-07-19 21:15:05,579 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39695 2023-07-19 21:15:05,580 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 2023-07-19 21:15:05,581 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 2023-07-19 21:15:05,581 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40615 2023-07-19 21:15:05,581 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39695 2023-07-19 21:15:05,581 
DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40615 2023-07-19 21:15:05,581 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39695 2023-07-19 21:15:05,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:05,590 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:05,591 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:05,593 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:05,594 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,594 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,594 WARN [RS:0;jenkins-hbase4:33985] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:05,595 INFO [RS:0;jenkins-hbase4:33985] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:05,595 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,595 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,597 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 21:15:05,594 WARN [RS:2;jenkins-hbase4:33539] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:05,595 WARN [RS:1;jenkins-hbase4:45225] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 21:15:05,601 INFO [RS:2;jenkins-hbase4:33539] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:05,605 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,601 INFO [RS:1;jenkins-hbase4:45225] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:05,606 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33539,1689801303815] 2023-07-19 21:15:05,606 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,607 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45225,1689801303640] 2023-07-19 21:15:05,607 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33985,1689801303414] 2023-07-19 21:15:05,613 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:05,636 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:05,639 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11389164480, jitterRate=0.06069859862327576}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:05,639 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:05,639 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:05,639 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:05,639 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:05,639 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:05,639 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:05,642 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:05,642 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:05,643 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,643 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,643 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ZKUtil(162): 
regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,645 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,645 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,646 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,646 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,649 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,651 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,651 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:05,651 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 21:15:05,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 21:15:05,664 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:05,664 DEBUG [RS:0;jenkins-hbase4:33985] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:05,667 DEBUG [RS:2;jenkins-hbase4:33539] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:05,678 INFO [RS:2;jenkins-hbase4:33539] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:05,678 INFO [RS:0;jenkins-hbase4:33985] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:05,679 INFO [RS:1;jenkins-hbase4:45225] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:05,683 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 21:15:05,687 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, 
ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 21:15:05,706 INFO [RS:1;jenkins-hbase4:45225] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:05,706 INFO [RS:2;jenkins-hbase4:33539] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:05,708 INFO [RS:0;jenkins-hbase4:33985] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:05,715 INFO [RS:0;jenkins-hbase4:33985] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:05,715 INFO [RS:1;jenkins-hbase4:45225] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:05,715 INFO [RS:2;jenkins-hbase4:33539] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:05,720 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,717 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,720 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,721 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:05,721 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:05,721 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:05,740 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,740 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,740 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:05,749 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,740 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:05,750 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,749 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,750 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:05,750 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:05,751 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:0;jenkins-hbase4:33985] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,751 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,752 DEBUG [RS:2;jenkins-hbase4:33539] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,753 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,755 DEBUG [RS:1;jenkins-hbase4:45225] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:05,758 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,758 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,758 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,758 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,758 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,758 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:05,763 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,763 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,763 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,780 INFO [RS:2;jenkins-hbase4:33539] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:05,780 INFO [RS:1;jenkins-hbase4:45225] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:05,780 INFO [RS:0;jenkins-hbase4:33985] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:05,784 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33539,1689801303815-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,784 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45225,1689801303640-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,784 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33985,1689801303414-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:05,811 INFO [RS:1;jenkins-hbase4:45225] regionserver.Replication(203): jenkins-hbase4.apache.org,45225,1689801303640 started 2023-07-19 21:15:05,811 INFO [RS:2;jenkins-hbase4:33539] regionserver.Replication(203): jenkins-hbase4.apache.org,33539,1689801303815 started 2023-07-19 21:15:05,811 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33539,1689801303815, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33539, sessionid=0x1017f701e770003 2023-07-19 21:15:05,811 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45225,1689801303640, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45225, sessionid=0x1017f701e770002 2023-07-19 21:15:05,813 INFO [RS:0;jenkins-hbase4:33985] regionserver.Replication(203): jenkins-hbase4.apache.org,33985,1689801303414 started 2023-07-19 21:15:05,813 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33985,1689801303414, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33985, sessionid=0x1017f701e770001 2023-07-19 21:15:05,813 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:05,813 DEBUG [RS:0;jenkins-hbase4:33985] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,813 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33985,1689801303414' 2023-07-19 21:15:05,813 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:05,813 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:05,813 DEBUG [RS:1;jenkins-hbase4:45225] 
procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:05,815 DEBUG [RS:1;jenkins-hbase4:45225] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,814 DEBUG [RS:2;jenkins-hbase4:33539] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,815 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:05,816 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:05,816 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:05,815 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45225,1689801303640' 2023-07-19 21:15:05,817 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:05,816 DEBUG [RS:0;jenkins-hbase4:33985] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:05,815 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33539,1689801303815' 2023-07-19 21:15:05,817 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:05,817 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33985,1689801303414' 2023-07-19 21:15:05,817 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:05,817 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:05,818 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:05,818 DEBUG [RS:0;jenkins-hbase4:33985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:05,818 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:05,818 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:05,818 DEBUG [RS:1;jenkins-hbase4:45225] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:05,819 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45225,1689801303640' 2023-07-19 21:15:05,819 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:05,819 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc 
started 2023-07-19 21:15:05,819 DEBUG [RS:0;jenkins-hbase4:33985] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:05,819 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:05,819 DEBUG [RS:2;jenkins-hbase4:33539] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:05,819 DEBUG [RS:1;jenkins-hbase4:45225] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:05,819 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33539,1689801303815' 2023-07-19 21:15:05,819 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:05,819 INFO [RS:0;jenkins-hbase4:33985] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:05,820 INFO [RS:0;jenkins-hbase4:33985] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 21:15:05,820 DEBUG [RS:1;jenkins-hbase4:45225] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:05,820 INFO [RS:1;jenkins-hbase4:45225] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:05,820 INFO [RS:1;jenkins-hbase4:45225] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-19 21:15:05,822 DEBUG [RS:2;jenkins-hbase4:33539] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:05,822 DEBUG [RS:2;jenkins-hbase4:33539] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:05,822 INFO [RS:2;jenkins-hbase4:33539] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:05,822 INFO [RS:2;jenkins-hbase4:33539] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 21:15:05,839 DEBUG [jenkins-hbase4:36267] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 21:15:05,855 DEBUG [jenkins-hbase4:36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:05,856 DEBUG [jenkins-hbase4:36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:05,857 DEBUG [jenkins-hbase4:36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:05,857 DEBUG [jenkins-hbase4:36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:05,857 DEBUG [jenkins-hbase4:36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:05,861 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33539,1689801303815, state=OPENING 2023-07-19 21:15:05,871 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 21:15:05,874 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:05,875 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:05,879 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:05,921 WARN [ReadOnlyZKClient-127.0.0.1:58627@0x7a5d5391] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 21:15:05,936 INFO [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45225%2C1689801303640, suffix=, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:05,939 INFO [RS:0;jenkins-hbase4:33985] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33985%2C1689801303414, suffix=, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33985,1689801303414, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:05,943 INFO [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33539%2C1689801303815, suffix=, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33539,1689801303815, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:05,974 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:05,974 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:05,976 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:05,982 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:05,982 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:05,983 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:05,988 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:05,991 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50000, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:05,992 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33539] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50000 deadline: 1689801365991, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:06,002 INFO [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640/jenkins-hbase4.apache.org%2C45225%2C1689801303640.1689801305939 2023-07-19 21:15:06,002 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:06,004 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:06,006 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 
21:15:06,006 DEBUG [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK]] 2023-07-19 21:15:06,016 INFO [RS:0;jenkins-hbase4:33985] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33985,1689801303414/jenkins-hbase4.apache.org%2C33985%2C1689801303414.1689801305941 2023-07-19 21:15:06,023 DEBUG [RS:0;jenkins-hbase4:33985] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK]] 2023-07-19 21:15:06,035 INFO [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33539,1689801303815/jenkins-hbase4.apache.org%2C33539%2C1689801303815.1689801305945 2023-07-19 21:15:06,042 DEBUG [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK], DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK]] 2023-07-19 21:15:06,067 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:06,071 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:06,077 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:06,090 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 21:15:06,091 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:06,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33539%2C1689801303815.meta, suffix=.meta, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33539,1689801303815, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:06,114 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:06,116 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:06,116 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:06,127 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,33539,1689801303815/jenkins-hbase4.apache.org%2C33539%2C1689801303815.meta.1689801306096.meta 2023-07-19 21:15:06,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK]] 2023-07-19 21:15:06,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:06,131 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:06,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 21:15:06,136 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-19 21:15:06,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 21:15:06,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:06,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 21:15:06,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 21:15:06,147 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:06,149 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:06,150 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:06,150 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:06,151 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:06,151 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:06,153 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:06,153 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:06,153 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:06,154 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:06,154 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:06,155 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:06,155 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:06,156 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:06,157 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:06,158 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:06,161 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:06,164 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 21:15:06,166 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:06,168 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11143632320, jitterRate=0.037831634283065796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:06,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:06,178 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689801306062 2023-07-19 21:15:06,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 21:15:06,198 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 21:15:06,198 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33539,1689801303815, state=OPEN 2023-07-19 21:15:06,202 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:06,202 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:06,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 21:15:06,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33539,1689801303815 in 323 msec 2023-07-19 21:15:06,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 21:15:06,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 544 msec 2023-07-19 21:15:06,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1100 sec 2023-07-19 21:15:06,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689801306216, completionTime=-1 2023-07-19 21:15:06,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 21:15:06,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 21:15:06,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 21:15:06,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689801366274 2023-07-19 21:15:06,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689801426274 2023-07-19 21:15:06,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 57 msec 2023-07-19 21:15:06,297 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36267,1689801301454-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:06,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36267,1689801301454-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:06,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36267,1689801301454-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:06,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36267, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:06,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:06,309 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 21:15:06,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 21:15:06,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:06,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 21:15:06,336 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:06,339 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:06,360 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,364 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06 empty. 2023-07-19 21:15:06,365 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,365 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 21:15:06,420 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:06,423 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4579bff74bc250630a8bf94138cfbe06, NAME => 'hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:06,451 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:06,452 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4579bff74bc250630a8bf94138cfbe06, disabling compactions & flushes 2023-07-19 21:15:06,452 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 
2023-07-19 21:15:06,452 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,452 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. after waiting 0 ms 2023-07-19 21:15:06,452 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,452 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,452 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4579bff74bc250630a8bf94138cfbe06: 2023-07-19 21:15:06,459 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:06,481 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801306463"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801306463"}]},"ts":"1689801306463"} 2023-07-19 21:15:06,522 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:06,524 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:06,530 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801306524"}]},"ts":"1689801306524"} 2023-07-19 21:15:06,536 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 21:15:06,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:06,539 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 21:15:06,542 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:06,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} 
racks are {/default-rack=0} 2023-07-19 21:15:06,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:06,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:06,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:06,544 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:06,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4579bff74bc250630a8bf94138cfbe06, ASSIGN}] 2023-07-19 21:15:06,547 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:06,549 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4579bff74bc250630a8bf94138cfbe06, ASSIGN 2023-07-19 21:15:06,551 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4579bff74bc250630a8bf94138cfbe06, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:06,553 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,554 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 empty. 
2023-07-19 21:15:06,554 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,554 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 21:15:06,585 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:06,592 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1934a6e0c77f024959d2c8636ae430b9, NAME => 'hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 1934a6e0c77f024959d2c8636ae430b9, disabling compactions & flushes 2023-07-19 21:15:06,629 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. after waiting 0 ms 2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:06,629 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
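The hbase:rsgroup descriptor being created above carries two extra pieces of metadata: a region coprocessor (MultiRowMutationEndpoint at priority 536870911) and a SPLIT_POLICY of DisabledRegionSplitPolicy so the table keeps a single region. A hedged sketch of how such a descriptor is assembled with the 2.x builder API follows; the table name 'demo_rsgroup_like' is hypothetical and this is not the code the master runs.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeDescriptorSketch {
  public static TableDescriptor build() throws Exception {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
        // coprocessor$1 in the log: MultiRowMutationEndpoint, default priority
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // SPLIT_POLICY metadata in the log: never split this table's region
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("m"))   // single family 'm', VERSIONS=1
            .setMaxVersions(1)
            .build())
        .build();
  }
}
```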
2023-07-19 21:15:06,629 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:06,636 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:06,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801306638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801306638"}]},"ts":"1689801306638"} 2023-07-19 21:15:06,652 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:06,656 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:06,656 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801306656"}]},"ts":"1689801306656"} 2023-07-19 21:15:06,666 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 21:15:06,675 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:06,675 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:06,675 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:06,675 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:06,675 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:06,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, ASSIGN}] 2023-07-19 21:15:06,692 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, ASSIGN 2023-07-19 21:15:06,697 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:06,698 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-19 21:15:06,700 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4579bff74bc250630a8bf94138cfbe06, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:06,700 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:06,700 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801306699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801306699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801306699"}]},"ts":"1689801306699"} 2023-07-19 21:15:06,700 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801306700"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801306700"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801306700"}]},"ts":"1689801306700"} 2023-07-19 21:15:06,706 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 4579bff74bc250630a8bf94138cfbe06, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:06,709 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:06,863 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:06,864 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:06,864 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:06,864 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:06,868 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43688, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:06,868 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59006, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:06,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,875 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
2023-07-19 21:15:06,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4579bff74bc250630a8bf94138cfbe06, NAME => 'hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:06,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1934a6e0c77f024959d2c8636ae430b9, NAME => 'hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. service=MultiRowMutationService 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,877 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 21:15:06,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:06,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,880 INFO [StoreOpener-4579bff74bc250630a8bf94138cfbe06-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,882 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,883 DEBUG [StoreOpener-4579bff74bc250630a8bf94138cfbe06-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/info 2023-07-19 21:15:06,884 DEBUG [StoreOpener-4579bff74bc250630a8bf94138cfbe06-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/info 2023-07-19 21:15:06,884 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:06,884 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:06,884 INFO [StoreOpener-4579bff74bc250630a8bf94138cfbe06-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4579bff74bc250630a8bf94138cfbe06 columnFamilyName info 2023-07-19 21:15:06,884 INFO 
[StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1934a6e0c77f024959d2c8636ae430b9 columnFamilyName m 2023-07-19 21:15:06,885 INFO [StoreOpener-4579bff74bc250630a8bf94138cfbe06-1] regionserver.HStore(310): Store=4579bff74bc250630a8bf94138cfbe06/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:06,886 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(310): Store=1934a6e0c77f024959d2c8636ae430b9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:06,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,890 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,892 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:06,898 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:06,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:06,900 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4579bff74bc250630a8bf94138cfbe06; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10216794240, jitterRate=-0.04848688840866089}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
2023-07-19 21:15:06,900 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4579bff74bc250630a8bf94138cfbe06: 2023-07-19 21:15:06,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:06,904 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06., pid=8, masterSystemTime=1689801306863 2023-07-19 21:15:06,906 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1934a6e0c77f024959d2c8636ae430b9; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@412048c0, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:06,906 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:06,908 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9., pid=9, masterSystemTime=1689801306864 2023-07-19 21:15:06,910 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,910 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:06,912 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4579bff74bc250630a8bf94138cfbe06, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:06,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:06,913 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801306911"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801306911"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801306911"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801306911"}]},"ts":"1689801306911"} 2023-07-19 21:15:06,913 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
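With both regions now opened and their post-open deploy tasks finished, test code in this suite typically blocks until the tables are actually serving before proceeding. A minimal sketch using HBaseTestingUtility is shown below; TEST_UTIL is assumed to be the utility instance started at the top of this log, and the calls are illustrative rather than taken from the test source.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForTableSketch {
  public static void waitOnline(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Block until the namespace table is available and all rsgroup regions are assigned.
    TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:namespace"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
  }
}
```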
2023-07-19 21:15:06,914 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:06,914 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801306914"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801306914"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801306914"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801306914"}]},"ts":"1689801306914"} 2023-07-19 21:15:06,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-19 21:15:06,922 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 4579bff74bc250630a8bf94138cfbe06, server=jenkins-hbase4.apache.org,45225,1689801303640 in 211 msec 2023-07-19 21:15:06,931 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 21:15:06,932 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,33985,1689801303414 in 210 msec 2023-07-19 21:15:06,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-19 21:15:06,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4579bff74bc250630a8bf94138cfbe06, ASSIGN in 376 msec 2023-07-19 21:15:06,939 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:06,939 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801306939"}]},"ts":"1689801306939"} 2023-07-19 21:15:06,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-19 21:15:06,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, ASSIGN in 257 msec 2023-07-19 21:15:06,941 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:06,942 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801306942"}]},"ts":"1689801306942"} 2023-07-19 21:15:06,944 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 21:15:06,945 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 21:15:06,951 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:06,952 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:06,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 628 msec 2023-07-19 21:15:06,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 416 msec 2023-07-19 21:15:07,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 21:15:07,038 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:07,038 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:07,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:07,065 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:07,067 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:07,071 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59018, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:07,073 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 21:15:07,074 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
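These RSGroupInfoManager and RSGroupStartupWorker entries appear because the test cluster runs the rsgroup master coprocessor and the rsgroup-aware balancer from the hbase-rsgroup module. The exact configuration is not visible in this log; the snippet below shows the standard branch-2 settings for enabling the feature and is an assumption about how this test's configuration is built.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Master coprocessor providing the RSGroupAdminService RPCs seen later in this log.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Group-aware balancer ("GroupBasedLoadBalancer is now online" above).
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}
```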
2023-07-19 21:15:07,087 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 21:15:07,114 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:07,123 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 51 msec 2023-07-19 21:15:07,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 21:15:07,147 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:07,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 21 msec 2023-07-19 21:15:07,163 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:07,163 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:07,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:07,179 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 21:15:07,180 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 21:15:07,184 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 21:15:07,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.161sec 2023-07-19 21:15:07,187 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 21:15:07,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
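The two CreateNamespaceProcedure runs above (namespace=default and namespace=hbase) are issued by the master itself during initialization. On the client side the equivalent operation is Admin#createNamespace; the sketch below is illustrative and uses a hypothetical namespace name, since the default and hbase namespaces are never created from client code.

```java
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceSketch {
  public static void create(Admin admin) throws Exception {
    // 'demo_ns' is a hypothetical namespace used only for illustration.
    admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
  }
}
```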
2023-07-19 21:15:07,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 21:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36267,1689801301454-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 21:15:07,192 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36267,1689801301454-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 21:15:07,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 21:15:07,282 DEBUG [Listener at localhost/39507] zookeeper.ReadOnlyZKClient(139): Connect 0x65b017d0 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:07,288 DEBUG [Listener at localhost/39507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d5eab83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:07,303 DEBUG [hconnection-0x2231fec8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:07,315 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:07,327 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:07,328 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:07,338 DEBUG [Listener at localhost/39507] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 21:15:07,341 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33664, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 21:15:07,354 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 21:15:07,355 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:07,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 21:15:07,361 DEBUG [Listener at localhost/39507] zookeeper.ReadOnlyZKClient(139): Connect 0x1792a4f4 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:07,367 DEBUG [Listener at localhost/39507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22934466, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-19 21:15:07,367 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:07,371 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:07,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017f701e77000a connected 2023-07-19 21:15:07,407 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=681, MaxFileDescriptor=60000, SystemLoadAverage=340, ProcessCount=176, AvailableMemoryMB=2873 2023-07-19 21:15:07,409 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-19 21:15:07,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:07,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:07,475 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:07,488 INFO [Listener at localhost/39507] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:07,492 INFO [Listener at localhost/39507] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43325 2023-07-19 21:15:07,493 INFO [Listener at localhost/39507] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:07,494 DEBUG [Listener at localhost/39507] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:07,496 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:07,500 INFO [Listener at localhost/39507] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:07,504 INFO [Listener at localhost/39507] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43325 connecting to ZooKeeper ensemble=127.0.0.1:58627 2023-07-19 21:15:07,507 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:433250x0, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:07,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43325-0x1017f701e77000b connected 2023-07-19 21:15:07,509 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:07,510 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 21:15:07,512 DEBUG [Listener at localhost/39507] zookeeper.ZKUtil(164): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:07,513 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43325 2023-07-19 21:15:07,515 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43325 2023-07-19 21:15:07,515 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43325 2023-07-19 21:15:07,516 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43325 2023-07-19 21:15:07,516 DEBUG [Listener at localhost/39507] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43325 2023-07-19 21:15:07,518 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:07,518 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:07,518 INFO [Listener at localhost/39507] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:07,519 INFO [Listener at localhost/39507] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:07,519 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:07,519 INFO [Listener at localhost/39507] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:07,519 
INFO [Listener at localhost/39507] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:07,519 INFO [Listener at localhost/39507] http.HttpServer(1146): Jetty bound to port 37915 2023-07-19 21:15:07,520 INFO [Listener at localhost/39507] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:07,523 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:07,524 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6508fc97{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:07,524 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:07,524 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38634af7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:07,650 INFO [Listener at localhost/39507] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:07,651 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:07,651 INFO [Listener at localhost/39507] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:07,652 INFO [Listener at localhost/39507] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:07,654 INFO [Listener at localhost/39507] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:07,656 INFO [Listener at localhost/39507] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3cbddc65{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/java.io.tmpdir/jetty-0_0_0_0-37915-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8891928305336306603/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:07,658 INFO [Listener at localhost/39507] server.AbstractConnector(333): Started ServerConnector@3a94b5f4{HTTP/1.1, (http/1.1)}{0.0.0.0:37915} 2023-07-19 21:15:07,658 INFO [Listener at localhost/39507] server.Server(415): Started @11821ms 2023-07-19 21:15:07,661 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(951): ClusterId : 04d140fc-999b-49d1-9db4-bb9fac47eabb 2023-07-19 21:15:07,663 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:07,666 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:07,666 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:07,669 DEBUG [RS:3;jenkins-hbase4:43325] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:07,670 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ReadOnlyZKClient(139): Connect 0x2376f5c7 to 127.0.0.1:58627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:07,680 DEBUG [RS:3;jenkins-hbase4:43325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db10fd9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:07,680 DEBUG [RS:3;jenkins-hbase4:43325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@298e9eb7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:07,690 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43325 2023-07-19 21:15:07,690 INFO [RS:3;jenkins-hbase4:43325] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:07,691 INFO [RS:3;jenkins-hbase4:43325] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:07,691 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:07,691 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36267,1689801301454 with isa=jenkins-hbase4.apache.org/172.31.14.131:43325, startcode=1689801307487 2023-07-19 21:15:07,691 DEBUG [RS:3;jenkins-hbase4:43325] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:07,698 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54287, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:07,699 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36267] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,699 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
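The RS:3 thread registering with the master above is an extra region server the test brings up inside the already-running mini cluster ("Restoring servers: 1" a few entries earlier). In test code this is usually a one-liner against the mini cluster, roughly as sketched below; TEST_UTIL is assumed to be the HBaseTestingUtility from the start of this log, and the snippet is a hedged approximation rather than the test's actual code.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class StartExtraRegionServerSketch {
  public static void addRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Start one more region server in the mini cluster and wait until it is online.
    JVMClusterUtil.RegionServerThread rst =
        TEST_UTIL.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline();
  }
}
```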
2023-07-19 21:15:07,699 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769 2023-07-19 21:15:07,699 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40615 2023-07-19 21:15:07,699 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39695 2023-07-19 21:15:07,706 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:07,706 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:07,706 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:07,706 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:07,706 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:07,707 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43325,1689801307487] 2023-07-19 21:15:07,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,707 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:07,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,707 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,708 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:07,707 WARN [RS:3;jenkins-hbase4:43325] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; 
znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:07,712 INFO [RS:3;jenkins-hbase4:43325] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:07,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:07,712 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,713 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36267,1689801301454] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 21:15:07,713 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:07,714 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:07,713 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:07,715 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,715 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:07,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,722 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,722 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:07,722 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:07,723 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ZKUtil(162): regionserver:43325-0x1017f701e77000b, 
quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,724 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:07,724 INFO [RS:3;jenkins-hbase4:43325] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:07,728 INFO [RS:3;jenkins-hbase4:43325] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:07,729 INFO [RS:3;jenkins-hbase4:43325] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:07,729 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:07,729 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:07,731 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,731 DEBUG [RS:3;jenkins-hbase4:43325] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:07,737 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:07,738 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:07,738 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:07,749 INFO [RS:3;jenkins-hbase4:43325] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:07,749 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43325,1689801307487-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:07,760 INFO [RS:3;jenkins-hbase4:43325] regionserver.Replication(203): jenkins-hbase4.apache.org,43325,1689801307487 started 2023-07-19 21:15:07,760 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43325,1689801307487, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43325, sessionid=0x1017f701e77000b 2023-07-19 21:15:07,761 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:07,761 DEBUG [RS:3;jenkins-hbase4:43325] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,761 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43325,1689801307487' 2023-07-19 21:15:07,761 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:07,761 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:07,762 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:07,762 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:07,762 DEBUG [RS:3;jenkins-hbase4:43325] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:07,762 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43325,1689801307487' 2023-07-19 21:15:07,762 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:07,763 DEBUG [RS:3;jenkins-hbase4:43325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:07,763 DEBUG [RS:3;jenkins-hbase4:43325] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:07,763 INFO [RS:3;jenkins-hbase4:43325] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:07,763 INFO [RS:3;jenkins-hbase4:43325] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 21:15:07,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:07,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:07,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:07,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:07,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:07,779 DEBUG [hconnection-0x63197ba-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:07,785 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50022, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:07,789 DEBUG [hconnection-0x63197ba-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:07,792 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:07,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:07,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:07,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:07,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-19 21:15:07,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33664 deadline: 1689802507806, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist.
2023-07-19 21:15:07,809 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-19 21:15:07,811 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-19 21:15:07,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-19 21:15:07,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-19 21:15:07,813 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-19 21:15:07,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-19 21:15:07,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-19 21:15:07,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-19 21:15:07,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-19 21:15:07,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_268583540
2023-07-19 21:15:07,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 21:15:07,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540
2023-07-19 21:15:07,830
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:07,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:07,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:07,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:07,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:07,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:07,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:07,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:07,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:07,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:07,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:07,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 21:15:07,861 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 21:15:07,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(238): Moving server region 1934a6e0c77f024959d2c8636ae430b9, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:07,862 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33539,1689801303815, state=CLOSING 2023-07-19 21:15:07,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE 2023-07-19 21:15:07,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 2 
region(s) to group default, current retry=0 2023-07-19 21:15:07,865 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE 2023-07-19 21:15:07,865 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:07,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:07,868 INFO [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43325%2C1689801307487, suffix=, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,43325,1689801307487, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:07,872 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:07,873 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,873 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801307873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801307873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801307873"}]},"ts":"1689801307873"} 2023-07-19 21:15:07,878 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:07,889 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:07,903 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:07,916 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:07,935 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:07,943 INFO [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,43325,1689801307487/jenkins-hbase4.apache.org%2C43325%2C1689801307487.1689801307869 2023-07-19 21:15:07,946 DEBUG [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK]] 2023-07-19 21:15:08,032 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-19 21:15:08,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:08,034 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:08,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:08,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:08,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:08,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-19 21:15:08,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/8c9ad1f4eea141428139ceb6f35d4f6d 2023-07-19 21:15:08,245 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/aea4f741c841413985d15932099557f9 2023-07-19 21:15:08,256 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/8c9ad1f4eea141428139ceb6f35d4f6d as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/8c9ad1f4eea141428139ceb6f35d4f6d 2023-07-19 21:15:08,267 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/8c9ad1f4eea141428139ceb6f35d4f6d, entries=21, sequenceid=15, filesize=7.1 K 2023-07-19 21:15:08,271 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/aea4f741c841413985d15932099557f9 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/aea4f741c841413985d15932099557f9 2023-07-19 
21:15:08,282 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/aea4f741c841413985d15932099557f9, entries=4, sequenceid=15, filesize=4.8 K 2023-07-19 21:15:08,284 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 249ms, sequenceid=15, compaction requested=false 2023-07-19 21:15:08,286 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 21:15:08,298 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-19 21:15:08,299 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:08,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:08,300 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:08,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43325,1689801307487 record at close sequenceid=15 2023-07-19 21:15:08,303 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-19 21:15:08,304 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-19 21:15:08,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-19 21:15:08,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33539,1689801303815 in 438 msec 2023-07-19 21:15:08,308 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:08,458 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:08,458 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43325,1689801307487, state=OPENING 2023-07-19 21:15:08,461 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:08,461 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:08,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:08,617 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:08,617 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:08,620 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:08,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 21:15:08,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:08,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43325%2C1689801307487.meta, suffix=.meta, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,43325,1689801307487, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:08,654 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:08,656 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:08,656 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:08,662 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,43325,1689801307487/jenkins-hbase4.apache.org%2C43325%2C1689801307487.meta.1689801308630.meta 2023-07-19 21:15:08,662 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK], DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK]] 2023-07-19 21:15:08,662 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:08,662 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:08,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 21:15:08,663 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-19 21:15:08,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 21:15:08,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:08,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 21:15:08,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 21:15:08,665 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:08,667 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:08,667 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:08,668 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:08,684 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/8c9ad1f4eea141428139ceb6f35d4f6d 2023-07-19 21:15:08,685 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:08,685 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:08,687 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:08,687 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:08,688 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:08,689 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:08,689 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:08,690 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:08,690 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:08,691 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:08,712 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/aea4f741c841413985d15932099557f9 2023-07-19 21:15:08,712 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:08,714 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:08,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:08,721 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 21:15:08,724 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:08,725 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10473579520, jitterRate=-0.024571895599365234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:08,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:08,727 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689801308617 2023-07-19 21:15:08,732 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 21:15:08,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 21:15:08,734 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43325,1689801307487, state=OPEN 2023-07-19 21:15:08,735 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:08,735 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:08,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-19 21:15:08,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,43325,1689801307487 in 274 msec 2023-07-19 21:15:08,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 881 msec 2023-07-19 21:15:08,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-19 21:15:08,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:08,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1934a6e0c77f024959d2c8636ae430b9, disabling compactions & flushes 2023-07-19 21:15:08,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:08,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:08,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. after waiting 0 ms 2023-07-19 21:15:08,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:08,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1934a6e0c77f024959d2c8636ae430b9 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-19 21:15:08,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/6998149e1ee246f6b75ccb6dbcfc034a 2023-07-19 21:15:08,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/6998149e1ee246f6b75ccb6dbcfc034a as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/6998149e1ee246f6b75ccb6dbcfc034a 2023-07-19 21:15:08,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/6998149e1ee246f6b75ccb6dbcfc034a, entries=3, sequenceid=9, filesize=5.2 K 2023-07-19 21:15:08,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for 1934a6e0c77f024959d2c8636ae430b9 in 90ms, sequenceid=9, compaction requested=false 2023-07-19 21:15:08,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 21:15:08,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 21:15:08,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:08,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:08,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:08,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1934a6e0c77f024959d2c8636ae430b9 move to jenkins-hbase4.apache.org,43325,1689801307487 record at close sequenceid=9 2023-07-19 21:15:08,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:08,996 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=CLOSED 2023-07-19 21:15:08,996 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801308996"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801308996"}]},"ts":"1689801308996"} 2023-07-19 21:15:08,997 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33539] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:50000 deadline: 1689801368997, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43325 startCode=1689801307487. As of locationSeqNum=15. 2023-07-19 21:15:09,099 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:09,100 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34966, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:09,117 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-19 21:15:09,117 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,33985,1689801303414 in 1.2290 sec 2023-07-19 21:15:09,119 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:09,270 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:09,270 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:09,271 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801309270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801309270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801309270"}]},"ts":"1689801309270"} 2023-07-19 21:15:09,274 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:09,432 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1934a6e0c77f024959d2c8636ae430b9, NAME => 'hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. service=MultiRowMutationService 2023-07-19 21:15:09,433 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,435 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,437 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:09,437 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:09,438 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1934a6e0c77f024959d2c8636ae430b9 columnFamilyName m 2023-07-19 21:15:09,446 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/6998149e1ee246f6b75ccb6dbcfc034a 2023-07-19 21:15:09,447 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(310): Store=1934a6e0c77f024959d2c8636ae430b9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:09,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,456 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:09,458 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1934a6e0c77f024959d2c8636ae430b9; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2bd598c5, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:09,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:09,459 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9., pid=17, masterSystemTime=1689801309427 2023-07-19 21:15:09,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:09,462 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:09,462 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:09,463 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801309462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801309462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801309462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801309462"}]},"ts":"1689801309462"} 2023-07-19 21:15:09,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-19 21:15:09,469 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,43325,1689801307487 in 191 msec 2023-07-19 21:15:09,471 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE in 1.6070 sec 2023-07-19 21:15:09,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=13 2023-07-19 21:15:09,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to default 2023-07-19 21:15:09,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:09,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:09,866 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33985] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:59020 deadline: 1689801369866, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43325 startCode=1689801307487. As of locationSeqNum=9. 2023-07-19 21:15:09,970 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33539] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50022 deadline: 1689801369970, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43325 startCode=1689801307487. As of locationSeqNum=15. 2023-07-19 21:15:10,072 DEBUG [hconnection-0x63197ba-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:10,076 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34980, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:10,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:10,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:10,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:10,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:10,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:10,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:10,118 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:10,121 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33985] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:59018 deadline: 1689801370121, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: 
hostname=jenkins-hbase4.apache.org port=43325 startCode=1689801307487. As of locationSeqNum=9. 2023-07-19 21:15:10,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-19 21:15:10,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:10,229 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:10,229 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:10,230 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:10,230 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:10,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:10,238 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:10,246 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:10,247 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:10,247 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:10,247 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:10,247 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:10,248 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d empty. 2023-07-19 21:15:10,248 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c empty. 2023-07-19 21:15:10,248 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 empty. 
2023-07-19 21:15:10,248 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb empty. 2023-07-19 21:15:10,248 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 empty. 2023-07-19 21:15:10,249 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:10,249 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:10,249 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:10,249 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:10,252 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:10,252 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 21:15:10,286 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:10,288 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => bd947a3497970180a8acdd0a7f3e77c5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:10,288 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5849ff41b210a46669ba8672fb54633d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY 
=> 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:10,288 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => dd83ba430a016c93fa7b8303c58e823c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:10,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:10,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 5849ff41b210a46669ba8672fb54633d, disabling compactions & flushes 2023-07-19 21:15:10,351 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:10,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:10,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. after waiting 0 ms 2023-07-19 21:15:10,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:10,351 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 
2023-07-19 21:15:10,351 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 5849ff41b210a46669ba8672fb54633d: 2023-07-19 21:15:10,352 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => aea657560b0d8725ec09f1d1d2aa80f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:10,362 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:10,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing dd83ba430a016c93fa7b8303c58e823c, disabling compactions & flushes 2023-07-19 21:15:10,364 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:10,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:10,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. after waiting 0 ms 2023-07-19 21:15:10,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:10,364 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 
2023-07-19 21:15:10,364 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for dd83ba430a016c93fa7b8303c58e823c: 2023-07-19 21:15:10,365 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f890784cd37bd7ba7c0af2043a25afcb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:10,387 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:10,388 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing aea657560b0d8725ec09f1d1d2aa80f7, disabling compactions & flushes 2023-07-19 21:15:10,389 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:10,389 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:10,389 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. after waiting 0 ms 2023-07-19 21:15:10,389 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:10,389 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:10,389 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for aea657560b0d8725ec09f1d1d2aa80f7: 2023-07-19 21:15:10,403 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:10,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f890784cd37bd7ba7c0af2043a25afcb, disabling compactions & flushes 2023-07-19 21:15:10,404 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:10,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:10,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. after waiting 0 ms 2023-07-19 21:15:10,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:10,404 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:10,404 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f890784cd37bd7ba7c0af2043a25afcb: 2023-07-19 21:15:10,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:10,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing bd947a3497970180a8acdd0a7f3e77c5, disabling compactions & flushes 2023-07-19 21:15:10,753 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 
2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. after waiting 0 ms 2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:10,753 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:10,753 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for bd947a3497970180a8acdd0a7f3e77c5: 2023-07-19 21:15:10,757 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:10,759 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801310759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801310759"}]},"ts":"1689801310759"} 2023-07-19 21:15:10,759 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801310759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801310759"}]},"ts":"1689801310759"} 2023-07-19 21:15:10,759 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801310759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801310759"}]},"ts":"1689801310759"} 2023-07-19 21:15:10,760 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801310759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801310759"}]},"ts":"1689801310759"} 2023-07-19 21:15:10,760 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801310759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801310759"}]},"ts":"1689801310759"} 2023-07-19 21:15:10,855 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-19 21:15:10,857 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:10,857 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801310857"}]},"ts":"1689801310857"} 2023-07-19 21:15:10,861 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 21:15:10,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:10,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:10,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:10,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:10,869 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, ASSIGN}] 2023-07-19 21:15:10,872 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, ASSIGN 2023-07-19 21:15:10,873 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, ASSIGN 2023-07-19 21:15:10,874 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, ASSIGN 2023-07-19 21:15:10,874 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, ASSIGN 2023-07-19 21:15:10,876 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:10,876 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, ASSIGN 2023-07-19 21:15:10,877 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:10,877 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:10,877 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:10,879 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:11,031 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 21:15:11,035 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,035 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:11,035 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:11,035 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,035 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,035 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801311035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801311035"}]},"ts":"1689801311035"} 2023-07-19 21:15:11,035 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801311035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801311035"}]},"ts":"1689801311035"} 2023-07-19 21:15:11,036 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801311035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801311035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801311035"}]},"ts":"1689801311035"} 2023-07-19 21:15:11,036 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801311035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801311035"}]},"ts":"1689801311035"} 2023-07-19 21:15:11,036 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801311035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801311035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801311035"}]},"ts":"1689801311035"} 2023-07-19 21:15:11,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=22, state=RUNNABLE; OpenRegionProcedure 
aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:11,041 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=21, state=RUNNABLE; OpenRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:11,045 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; OpenRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:11,049 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=20, state=RUNNABLE; OpenRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:11,049 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=19, state=RUNNABLE; OpenRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:11,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:11,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:11,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5849ff41b210a46669ba8672fb54633d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 21:15:11,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f890784cd37bd7ba7c0af2043a25afcb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 
5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,216 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,216 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,220 DEBUG [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/f 2023-07-19 21:15:11,220 DEBUG [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/f 2023-07-19 21:15:11,220 DEBUG [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/f 2023-07-19 21:15:11,220 DEBUG [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/f 2023-07-19 21:15:11,221 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5849ff41b210a46669ba8672fb54633d columnFamilyName f 2023-07-19 21:15:11,221 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f890784cd37bd7ba7c0af2043a25afcb columnFamilyName f 2023-07-19 21:15:11,221 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] regionserver.HStore(310): Store=f890784cd37bd7ba7c0af2043a25afcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:11,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,223 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] regionserver.HStore(310): Store=5849ff41b210a46669ba8672fb54633d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:11,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:11,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:11,241 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f890784cd37bd7ba7c0af2043a25afcb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9894305280, jitterRate=-0.0785210132598877}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:11,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f890784cd37bd7ba7c0af2043a25afcb: 2023-07-19 21:15:11,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:11,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:11,242 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb., pid=26, masterSystemTime=1689801311204 2023-07-19 21:15:11,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:11,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:11,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:11,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd83ba430a016c93fa7b8303c58e823c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 21:15:11,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:11,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,247 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:11,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:11,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,247 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801311247"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801311247"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801311247"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801311247"}]},"ts":"1689801311247"} 2023-07-19 21:15:11,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5849ff41b210a46669ba8672fb54633d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11376003520, jitterRate=0.059472888708114624}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:11,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5849ff41b210a46669ba8672fb54633d: 2023-07-19 21:15:11,249 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d., pid=27, masterSystemTime=1689801311202 2023-07-19 21:15:11,253 DEBUG [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/f 2023-07-19 21:15:11,253 DEBUG [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/f 2023-07-19 21:15:11,255 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd83ba430a016c93fa7b8303c58e823c columnFamilyName f 2023-07-19 21:15:11,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-19 21:15:11,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; OpenRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,43325,1689801307487 in 206 msec 2023-07-19 21:15:11,256 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] regionserver.HStore(310): Store=dd83ba430a016c93fa7b8303c58e823c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:11,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:11,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:11,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:11,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aea657560b0d8725ec09f1d1d2aa80f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 21:15:11,259 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,259 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311258"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801311258"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801311258"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801311258"}]},"ts":"1689801311258"} 2023-07-19 21:15:11,260 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, ASSIGN in 388 msec 2023-07-19 21:15:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=20 2023-07-19 21:15:11,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=20, state=SUCCESS; OpenRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,45225,1689801303640 in 219 msec 2023-07-19 21:15:11,273 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family f of region aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,275 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, ASSIGN in 403 msec 2023-07-19 21:15:11,283 DEBUG [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/f 2023-07-19 21:15:11,283 DEBUG [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/f 2023-07-19 21:15:11,286 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aea657560b0d8725ec09f1d1d2aa80f7 columnFamilyName f 2023-07-19 21:15:11,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:11,288 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] regionserver.HStore(310): Store=aea657560b0d8725ec09f1d1d2aa80f7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:11,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:11,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:11,315 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd83ba430a016c93fa7b8303c58e823c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10512018240, jitterRate=-0.020992010831832886}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:11,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd83ba430a016c93fa7b8303c58e823c: 2023-07-19 21:15:11,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:11,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c., pid=28, masterSystemTime=1689801311204 2023-07-19 21:15:11,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aea657560b0d8725ec09f1d1d2aa80f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10461797440, jitterRate=-0.025669187307357788}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:11,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aea657560b0d8725ec09f1d1d2aa80f7: 2023-07-19 21:15:11,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7., pid=24, masterSystemTime=1689801311202 2023-07-19 21:15:11,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:11,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:11,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:11,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:11,326 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 
2023-07-19 21:15:11,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bd947a3497970180a8acdd0a7f3e77c5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 21:15:11,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:11,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,327 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:11,328 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801311325"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801311325"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801311325"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801311325"}]},"ts":"1689801311325"} 2023-07-19 21:15:11,328 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,328 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311328"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801311328"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801311328"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801311328"}]},"ts":"1689801311328"} 2023-07-19 21:15:11,331 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=19 2023-07-19 21:15:11,347 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=19, state=SUCCESS; OpenRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,43325,1689801307487 in 284 msec 2023-07-19 
21:15:11,346 DEBUG [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/f 2023-07-19 21:15:11,347 DEBUG [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/f 2023-07-19 21:15:11,348 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd947a3497970180a8acdd0a7f3e77c5 columnFamilyName f 2023-07-19 21:15:11,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=22 2023-07-19 21:15:11,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=22, state=SUCCESS; OpenRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,45225,1689801303640 in 296 msec 2023-07-19 21:15:11,350 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, ASSIGN in 477 msec 2023-07-19 21:15:11,351 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, ASSIGN in 479 msec 2023-07-19 21:15:11,352 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] regionserver.HStore(310): Store=bd947a3497970180a8acdd0a7f3e77c5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:11,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:11,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:11,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bd947a3497970180a8acdd0a7f3e77c5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9706735520, jitterRate=-0.09598980844020844}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:11,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bd947a3497970180a8acdd0a7f3e77c5: 2023-07-19 21:15:11,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5., pid=25, masterSystemTime=1689801311202 2023-07-19 21:15:11,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:11,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:11,372 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:11,372 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801311371"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801311371"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801311371"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801311371"}]},"ts":"1689801311371"} 2023-07-19 21:15:11,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-19 21:15:11,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; OpenRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,45225,1689801303640 in 333 msec 2023-07-19 21:15:11,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=18 2023-07-19 21:15:11,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, ASSIGN in 508 msec 2023-07-19 21:15:11,382 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:11,383 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801311383"}]},"ts":"1689801311383"} 2023-07-19 21:15:11,385 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 21:15:11,388 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:11,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.2740 sec 2023-07-19 21:15:11,707 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 21:15:11,798 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 21:15:11,798 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-19 21:15:11,799 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:11,799 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-19 21:15:11,799 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 21:15:11,799 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-19 21:15:11,800 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 21:15:11,801 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-19 21:15:12,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-19 21:15:12,244 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-19 21:15:12,244 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-19 21:15:12,245 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:12,246 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33539] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:50020 deadline: 1689801372246, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43325 startCode=1689801307487. As of locationSeqNum=15. 2023-07-19 21:15:12,349 DEBUG [hconnection-0x2231fec8-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:12,356 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50036, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:12,367 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-19 21:15:12,368 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:12,368 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-19 21:15:12,369 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:12,374 DEBUG [Listener at localhost/39507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:12,376 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43088, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:12,379 DEBUG [Listener at localhost/39507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:12,380 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34206, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:12,381 DEBUG [Listener at localhost/39507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:12,383 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50048, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:12,385 DEBUG [Listener at localhost/39507] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:12,387 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44782, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:12,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:12,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:12,399 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:12,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:12,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:12,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region dd83ba430a016c93fa7b8303c58e823c to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:12,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:12,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:12,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:12,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:12,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, REOPEN/MOVE 2023-07-19 21:15:12,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 5849ff41b210a46669ba8672fb54633d to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,417 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, REOPEN/MOVE 2023-07-19 21:15:12,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:12,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:12,418 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:12,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:12,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:12,419 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:12,419 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312419"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312419"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312419"}]},"ts":"1689801312419"} 2023-07-19 21:15:12,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, REOPEN/MOVE 2023-07-19 21:15:12,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region bd947a3497970180a8acdd0a7f3e77c5 to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,421 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, REOPEN/MOVE 2023-07-19 21:15:12,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:12,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:12,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:12,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:12,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:12,422 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:12,422 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312422"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312422"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312422"}]},"ts":"1689801312422"} 2023-07-19 21:15:12,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, 
ppid=29, state=RUNNABLE; CloseRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:12,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, REOPEN/MOVE 2023-07-19 21:15:12,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region aea657560b0d8725ec09f1d1d2aa80f7 to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,425 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=30, state=RUNNABLE; CloseRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:12,425 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, REOPEN/MOVE 2023-07-19 21:15:12,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:12,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:12,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:12,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:12,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:12,432 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:12,432 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312432"}]},"ts":"1689801312432"} 2023-07-19 21:15:12,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, REOPEN/MOVE 2023-07-19 21:15:12,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region f890784cd37bd7ba7c0af2043a25afcb to RSGroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:12,434 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, REOPEN/MOVE 2023-07-19 21:15:12,434 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:12,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:12,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:12,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:12,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:12,439 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:12,439 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312439"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312439"}]},"ts":"1689801312439"} 2023-07-19 21:15:12,439 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:12,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, REOPEN/MOVE 2023-07-19 21:15:12,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_268583540, current retry=0 2023-07-19 21:15:12,442 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, REOPEN/MOVE 2023-07-19 21:15:12,443 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=34, state=RUNNABLE; CloseRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:12,445 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:12,445 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312445"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312445"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312445"}]},"ts":"1689801312445"} 2023-07-19 21:15:12,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:12,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd83ba430a016c93fa7b8303c58e823c, disabling compactions & flushes 2023-07-19 21:15:12,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. after waiting 0 ms 2023-07-19 21:15:12,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:12,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:12,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5849ff41b210a46669ba8672fb54633d, disabling compactions & flushes 2023-07-19 21:15:12,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:12,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:12,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd83ba430a016c93fa7b8303c58e823c: 2023-07-19 21:15:12,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 
after waiting 0 ms 2023-07-19 21:15:12,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dd83ba430a016c93fa7b8303c58e823c move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:12,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:12,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f890784cd37bd7ba7c0af2043a25afcb, disabling compactions & flushes 2023-07-19 21:15:12,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. after waiting 0 ms 2023-07-19 21:15:12,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,597 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=CLOSED 2023-07-19 21:15:12,597 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312597"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801312597"}]},"ts":"1689801312597"} 2023-07-19 21:15:12,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:12,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 
2023-07-19 21:15:12,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5849ff41b210a46669ba8672fb54633d: 2023-07-19 21:15:12,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5849ff41b210a46669ba8672fb54633d move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:12,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:12,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:12,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bd947a3497970180a8acdd0a7f3e77c5, disabling compactions & flushes 2023-07-19 21:15:12,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f890784cd37bd7ba7c0af2043a25afcb: 2023-07-19 21:15:12,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:12,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f890784cd37bd7ba7c0af2043a25afcb move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:12,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:12,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. after waiting 0 ms 2023-07-19 21:15:12,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-19 21:15:12,609 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=CLOSED 2023-07-19 21:15:12,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,43325,1689801307487 in 177 msec 2023-07-19 21:15:12,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 
2023-07-19 21:15:12,609 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312609"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801312609"}]},"ts":"1689801312609"} 2023-07-19 21:15:12,610 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:12,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,613 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=CLOSED 2023-07-19 21:15:12,614 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801312613"}]},"ts":"1689801312613"} 2023-07-19 21:15:12,616 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=30 2023-07-19 21:15:12,616 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=30, state=SUCCESS; CloseRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,45225,1689801303640 in 187 msec 2023-07-19 21:15:12,617 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:12,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:12,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 
2023-07-19 21:15:12,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bd947a3497970180a8acdd0a7f3e77c5: 2023-07-19 21:15:12,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bd947a3497970180a8acdd0a7f3e77c5 move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:12,626 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-19 21:15:12,626 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,43325,1689801307487 in 167 msec 2023-07-19 21:15:12,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,627 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:12,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aea657560b0d8725ec09f1d1d2aa80f7, disabling compactions & flushes 2023-07-19 21:15:12,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:12,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:12,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. after waiting 0 ms 2023-07-19 21:15:12,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:12,629 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=CLOSED 2023-07-19 21:15:12,629 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312629"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801312629"}]},"ts":"1689801312629"} 2023-07-19 21:15:12,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-19 21:15:12,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,45225,1689801303640 in 192 msec 2023-07-19 21:15:12,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:12,636 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:12,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:12,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aea657560b0d8725ec09f1d1d2aa80f7: 2023-07-19 21:15:12,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aea657560b0d8725ec09f1d1d2aa80f7 move to jenkins-hbase4.apache.org,33985,1689801303414 record at close sequenceid=2 2023-07-19 21:15:12,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,641 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=CLOSED 2023-07-19 21:15:12,641 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312641"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801312641"}]},"ts":"1689801312641"} 2023-07-19 21:15:12,647 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=34 2023-07-19 21:15:12,647 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=34, state=SUCCESS; CloseRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,45225,1689801303640 in 200 msec 2023-07-19 21:15:12,648 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:12,761 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 21:15:12,761 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:12,761 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,761 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,761 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312761"}]},"ts":"1689801312761"} 2023-07-19 21:15:12,762 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312761"}]},"ts":"1689801312761"} 2023-07-19 21:15:12,761 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,761 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,762 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312761"}]},"ts":"1689801312761"} 2023-07-19 21:15:12,762 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312761"}]},"ts":"1689801312761"} 2023-07-19 21:15:12,762 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801312761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801312761"}]},"ts":"1689801312761"} 2023-07-19 21:15:12,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=29, state=RUNNABLE; OpenRegionProcedure 
dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:12,766 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=34, state=RUNNABLE; OpenRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:12,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:12,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:12,770 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=35, state=RUNNABLE; OpenRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:12,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd83ba430a016c93fa7b8303c58e823c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aea657560b0d8725ec09f1d1d2aa80f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 21:15:12,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:12,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,930 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,930 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,932 DEBUG [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/f 2023-07-19 21:15:12,932 DEBUG [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/f 2023-07-19 21:15:12,932 DEBUG [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/f 2023-07-19 21:15:12,932 DEBUG [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/f 2023-07-19 21:15:12,932 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd83ba430a016c93fa7b8303c58e823c columnFamilyName f 2023-07-19 21:15:12,933 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aea657560b0d8725ec09f1d1d2aa80f7 columnFamilyName f 2023-07-19 21:15:12,933 INFO [StoreOpener-dd83ba430a016c93fa7b8303c58e823c-1] regionserver.HStore(310): Store=dd83ba430a016c93fa7b8303c58e823c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:12,933 INFO [StoreOpener-aea657560b0d8725ec09f1d1d2aa80f7-1] regionserver.HStore(310): Store=aea657560b0d8725ec09f1d1d2aa80f7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:12,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:12,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:12,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aea657560b0d8725ec09f1d1d2aa80f7; next sequenceid=5; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12023992800, jitterRate=0.11982159316539764}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:12,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd83ba430a016c93fa7b8303c58e823c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10951250720, jitterRate=0.019914701581001282}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:12,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aea657560b0d8725ec09f1d1d2aa80f7: 2023-07-19 21:15:12,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd83ba430a016c93fa7b8303c58e823c: 2023-07-19 21:15:12,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7., pid=40, masterSystemTime=1689801312919 2023-07-19 21:15:12,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c., pid=39, masterSystemTime=1689801312917 2023-07-19 21:15:12,948 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,949 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:12,949 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801312949"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801312949"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801312949"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801312949"}]},"ts":"1689801312949"} 2023-07-19 21:15:12,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:12,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:12,954 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312948"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801312948"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801312948"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801312948"}]},"ts":"1689801312948"} 2023-07-19 21:15:12,956 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=34 2023-07-19 21:15:12,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=34, state=SUCCESS; OpenRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,33985,1689801303414 in 187 msec 2023-07-19 21:15:12,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:12,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f890784cd37bd7ba7c0af2043a25afcb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 21:15:12,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:12,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, REOPEN/MOVE in 528 msec 2023-07-19 21:15:12,961 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=29 2023-07-19 21:15:12,961 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=29, state=SUCCESS; OpenRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,33539,1689801303815 in 194 msec 2023-07-19 21:15:12,963 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,965 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, REOPEN/MOVE in 546 msec 2023-07-19 21:15:12,966 DEBUG [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/f 2023-07-19 21:15:12,966 DEBUG [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/f 2023-07-19 21:15:12,967 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f890784cd37bd7ba7c0af2043a25afcb columnFamilyName f 2023-07-19 21:15:12,967 INFO [StoreOpener-f890784cd37bd7ba7c0af2043a25afcb-1] regionserver.HStore(310): Store=f890784cd37bd7ba7c0af2043a25afcb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:12,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:12,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f890784cd37bd7ba7c0af2043a25afcb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10256122880, jitterRate=-0.04482412338256836}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:12,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f890784cd37bd7ba7c0af2043a25afcb: 2023-07-19 21:15:12,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb., pid=43, masterSystemTime=1689801312917 2023-07-19 21:15:12,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:12,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:12,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bd947a3497970180a8acdd0a7f3e77c5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 21:15:12,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:12,979 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:12,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,980 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801312979"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801312979"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801312979"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801312979"}]},"ts":"1689801312979"} 2023-07-19 21:15:12,987 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,988 DEBUG [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/f 2023-07-19 
21:15:12,988 DEBUG [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/f 2023-07-19 21:15:12,989 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=35 2023-07-19 21:15:12,989 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=35, state=SUCCESS; OpenRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,33539,1689801303815 in 212 msec 2023-07-19 21:15:12,989 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd947a3497970180a8acdd0a7f3e77c5 columnFamilyName f 2023-07-19 21:15:12,991 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, REOPEN/MOVE in 554 msec 2023-07-19 21:15:12,991 INFO [StoreOpener-bd947a3497970180a8acdd0a7f3e77c5-1] regionserver.HStore(310): Store=bd947a3497970180a8acdd0a7f3e77c5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:12,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:12,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:13,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bd947a3497970180a8acdd0a7f3e77c5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11495685600, jitterRate=0.07061915099620819}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:13,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bd947a3497970180a8acdd0a7f3e77c5: 2023-07-19 21:15:13,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5., pid=42, masterSystemTime=1689801312917 2023-07-19 21:15:13,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:13,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:13,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:13,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5849ff41b210a46669ba8672fb54633d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 21:15:13,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:13,004 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,005 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313004"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801313004"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801313004"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801313004"}]},"ts":"1689801313004"} 2023-07-19 21:15:13,007 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,008 DEBUG [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/f 2023-07-19 21:15:13,008 DEBUG [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/f 2023-07-19 21:15:13,009 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5849ff41b210a46669ba8672fb54633d columnFamilyName f 2023-07-19 21:15:13,009 INFO [StoreOpener-5849ff41b210a46669ba8672fb54633d-1] regionserver.HStore(310): Store=5849ff41b210a46669ba8672fb54633d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:13,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,013 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-19 21:15:13,013 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,33539,1689801303815 in 239 msec 2023-07-19 21:15:13,016 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, REOPEN/MOVE in 591 msec 2023-07-19 21:15:13,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5849ff41b210a46669ba8672fb54633d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11552767360, jitterRate=0.07593530416488647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:13,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5849ff41b210a46669ba8672fb54633d: 2023-07-19 21:15:13,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d., pid=41, masterSystemTime=1689801312917 2023-07-19 21:15:13,022 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:13,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:13,022 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,022 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313022"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801313022"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801313022"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801313022"}]},"ts":"1689801313022"} 2023-07-19 21:15:13,027 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-19 21:15:13,027 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,33539,1689801303815 in 258 msec 2023-07-19 21:15:13,030 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, REOPEN/MOVE in 608 msec 2023-07-19 21:15:13,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-19 21:15:13,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_268583540. 
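
For context, a minimal client-side sketch of the rsgroup calls behind the RSGroupAdminService.MoveTables, ListRSGroupInfos and GetRSGroupInfoOfTable requests recorded in the entries that follow. The table and group names are taken from this run; the Connection setup and the use of RSGroupAdminClient are assumptions for illustration, not the test's actual code.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

      // Move the table into the target group; the master unassigns each region and
      // reopens it on servers of that group (the REOPEN/MOVE procedures logged above).
      rsGroupAdmin.moveTables(Collections.singleton(table),
          "Group_testTableMoveTruncateAndDrop_268583540");

      // Client-side equivalents of the ListRSGroupInfos / GetRSGroupInfoOfTable RPCs.
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName());
      }
      RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table now in group: " + ofTable.getName());
    }
  }
}
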
2023-07-19 21:15:13,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:13,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:13,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:13,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:13,450 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:13,456 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,473 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801313472"}]},"ts":"1689801313472"} 2023-07-19 21:15:13,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-19 21:15:13,475 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 21:15:13,478 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 21:15:13,482 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, UNASSIGN}] 2023-07-19 21:15:13,485 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, UNASSIGN 2023-07-19 21:15:13,486 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, UNASSIGN 2023-07-19 21:15:13,489 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, UNASSIGN 2023-07-19 21:15:13,489 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, UNASSIGN 2023-07-19 21:15:13,489 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, UNASSIGN 2023-07-19 21:15:13,490 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,490 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801313490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801313490"}]},"ts":"1689801313490"} 2023-07-19 21:15:13,491 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,491 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801313491"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801313491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801313491"}]},"ts":"1689801313491"} 2023-07-19 21:15:13,491 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:13,491 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,491 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313491"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801313491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801313491"}]},"ts":"1689801313491"} 2023-07-19 21:15:13,492 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801313491"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801313491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801313491"}]},"ts":"1689801313491"} 2023-07-19 21:15:13,492 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:13,492 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313492"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801313492"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801313492"}]},"ts":"1689801313492"} 2023-07-19 21:15:13,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=46, state=RUNNABLE; CloseRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:13,501 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=45, state=RUNNABLE; CloseRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:13,503 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=48, state=RUNNABLE; CloseRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:13,505 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:13,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=47, state=RUNNABLE; CloseRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:13,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-19 21:15:13,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:13,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bd947a3497970180a8acdd0a7f3e77c5, disabling compactions & flushes 2023-07-19 21:15:13,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 
2023-07-19 21:15:13,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:13,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. after waiting 0 ms 2023-07-19 21:15:13,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:13,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:13,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aea657560b0d8725ec09f1d1d2aa80f7, disabling compactions & flushes 2023-07-19 21:15:13,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:13,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:13,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. after waiting 0 ms 2023-07-19 21:15:13,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 2023-07-19 21:15:13,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:13,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5. 2023-07-19 21:15:13,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bd947a3497970180a8acdd0a7f3e77c5: 2023-07-19 21:15:13,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:13,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7. 
2023-07-19 21:15:13,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aea657560b0d8725ec09f1d1d2aa80f7: 2023-07-19 21:15:13,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:13,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:13,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f890784cd37bd7ba7c0af2043a25afcb, disabling compactions & flushes 2023-07-19 21:15:13,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:13,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:13,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. after waiting 0 ms 2023-07-19 21:15:13,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 2023-07-19 21:15:13,666 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=bd947a3497970180a8acdd0a7f3e77c5, regionState=CLOSED 2023-07-19 21:15:13,666 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313666"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801313666"}]},"ts":"1689801313666"} 2023-07-19 21:15:13,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:13,669 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=aea657560b0d8725ec09f1d1d2aa80f7, regionState=CLOSED 2023-07-19 21:15:13,669 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801313669"}]},"ts":"1689801313669"} 2023-07-19 21:15:13,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:13,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb. 
2023-07-19 21:15:13,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=47 2023-07-19 21:15:13,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f890784cd37bd7ba7c0af2043a25afcb: 2023-07-19 21:15:13,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=47, state=SUCCESS; CloseRegionProcedure bd947a3497970180a8acdd0a7f3e77c5, server=jenkins-hbase4.apache.org,33539,1689801303815 in 164 msec 2023-07-19 21:15:13,674 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=48 2023-07-19 21:15:13,674 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; CloseRegionProcedure aea657560b0d8725ec09f1d1d2aa80f7, server=jenkins-hbase4.apache.org,33985,1689801303414 in 168 msec 2023-07-19 21:15:13,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd947a3497970180a8acdd0a7f3e77c5, UNASSIGN in 193 msec 2023-07-19 21:15:13,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:13,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:13,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd83ba430a016c93fa7b8303c58e823c, disabling compactions & flushes 2023-07-19 21:15:13,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:13,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:13,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. after waiting 0 ms 2023-07-19 21:15:13,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 
2023-07-19 21:15:13,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea657560b0d8725ec09f1d1d2aa80f7, UNASSIGN in 194 msec 2023-07-19 21:15:13,677 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=f890784cd37bd7ba7c0af2043a25afcb, regionState=CLOSED 2023-07-19 21:15:13,677 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801313677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801313677"}]},"ts":"1689801313677"} 2023-07-19 21:15:13,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-19 21:15:13,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure f890784cd37bd7ba7c0af2043a25afcb, server=jenkins-hbase4.apache.org,33539,1689801303815 in 174 msec 2023-07-19 21:15:13,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:13,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f890784cd37bd7ba7c0af2043a25afcb, UNASSIGN in 216 msec 2023-07-19 21:15:13,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c. 2023-07-19 21:15:13,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd83ba430a016c93fa7b8303c58e823c: 2023-07-19 21:15:13,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:13,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,702 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=dd83ba430a016c93fa7b8303c58e823c, regionState=CLOSED 2023-07-19 21:15:13,703 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801313702"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801313702"}]},"ts":"1689801313702"} 2023-07-19 21:15:13,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5849ff41b210a46669ba8672fb54633d, disabling compactions & flushes 2023-07-19 21:15:13,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 
2023-07-19 21:15:13,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:13,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. after waiting 0 ms 2023-07-19 21:15:13,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 2023-07-19 21:15:13,712 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=45 2023-07-19 21:15:13,712 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; CloseRegionProcedure dd83ba430a016c93fa7b8303c58e823c, server=jenkins-hbase4.apache.org,33539,1689801303815 in 208 msec 2023-07-19 21:15:13,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd83ba430a016c93fa7b8303c58e823c, UNASSIGN in 232 msec 2023-07-19 21:15:13,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:13,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d. 
2023-07-19 21:15:13,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5849ff41b210a46669ba8672fb54633d: 2023-07-19 21:15:13,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,730 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5849ff41b210a46669ba8672fb54633d, regionState=CLOSED 2023-07-19 21:15:13,731 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801313730"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801313730"}]},"ts":"1689801313730"} 2023-07-19 21:15:13,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=46 2023-07-19 21:15:13,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=46, state=SUCCESS; CloseRegionProcedure 5849ff41b210a46669ba8672fb54633d, server=jenkins-hbase4.apache.org,33539,1689801303815 in 233 msec 2023-07-19 21:15:13,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=44 2023-07-19 21:15:13,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5849ff41b210a46669ba8672fb54633d, UNASSIGN in 255 msec 2023-07-19 21:15:13,738 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801313738"}]},"ts":"1689801313738"} 2023-07-19 21:15:13,740 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 21:15:13,742 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 21:15:13,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 281 msec 2023-07-19 21:15:13,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-19 21:15:13,778 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-19 21:15:13,780 INFO [Listener at localhost/39507] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:13,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-19 21:15:13,800 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-19 21:15:13,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:13,815 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,815 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:13,815 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:13,815 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:13,815 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:13,819 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits] 2023-07-19 21:15:13,822 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits] 2023-07-19 21:15:13,822 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits] 2023-07-19 21:15:13,823 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits] 2023-07-19 21:15:13,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/f, FileablePath, 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits] 2023-07-19 21:15:13,832 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d/recovered.edits/7.seqid 2023-07-19 21:15:13,834 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5849ff41b210a46669ba8672fb54633d 2023-07-19 21:15:13,837 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb/recovered.edits/7.seqid 2023-07-19 21:15:13,838 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f890784cd37bd7ba7c0af2043a25afcb 2023-07-19 21:15:13,838 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7/recovered.edits/7.seqid 2023-07-19 21:15:13,839 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5/recovered.edits/7.seqid 2023-07-19 21:15:13,839 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c/recovered.edits/7.seqid 2023-07-19 21:15:13,839 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea657560b0d8725ec09f1d1d2aa80f7 2023-07-19 21:15:13,839 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd947a3497970180a8acdd0a7f3e77c5 2023-07-19 21:15:13,840 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd83ba430a016c93fa7b8303c58e823c 2023-07-19 21:15:13,840 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 21:15:13,868 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 21:15:13,876 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 21:15:13,877 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-19 21:15:13,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801313878"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801313878"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801313878"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801313878"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801313878"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,883 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 21:15:13,884 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => dd83ba430a016c93fa7b8303c58e823c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801310111.dd83ba430a016c93fa7b8303c58e823c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5849ff41b210a46669ba8672fb54633d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801310111.5849ff41b210a46669ba8672fb54633d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => bd947a3497970180a8acdd0a7f3e77c5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801310111.bd947a3497970180a8acdd0a7f3e77c5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => aea657560b0d8725ec09f1d1d2aa80f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801310111.aea657560b0d8725ec09f1d1d2aa80f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, 
{ENCODED => f890784cd37bd7ba7c0af2043a25afcb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801310111.f890784cd37bd7ba7c0af2043a25afcb.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 21:15:13,884 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-19 21:15:13,884 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801313884"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:13,887 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 21:15:13,896 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:13,897 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:13,897 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:13,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:13,896 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:13,897 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff empty. 2023-07-19 21:15:13,898 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e empty. 2023-07-19 21:15:13,898 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:13,899 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 empty. 2023-07-19 21:15:13,899 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 empty. 
2023-07-19 21:15:13,899 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca empty. 2023-07-19 21:15:13,900 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:13,900 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:13,901 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:13,901 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:13,901 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 21:15:13,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:13,973 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:13,975 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 517f80a0dfdf49cd4b8e1711d5c380ff, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:13,976 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1b49422fa76f2c24aeb5225bda3038e7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:13,979 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 53ed9ca02e58dfcd0187f0238b3416ca, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 517f80a0dfdf49cd4b8e1711d5c380ff, disabling compactions & flushes 2023-07-19 21:15:14,023 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. after waiting 0 ms 2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,023 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 
2023-07-19 21:15:14,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 517f80a0dfdf49cd4b8e1711d5c380ff: 2023-07-19 21:15:14,024 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8d2c125950422b670a99d56a4d7a087, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 53ed9ca02e58dfcd0187f0238b3416ca, disabling compactions & flushes 2023-07-19 21:15:14,035 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. after waiting 0 ms 2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:14,035 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 
2023-07-19 21:15:14,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 53ed9ca02e58dfcd0187f0238b3416ca: 2023-07-19 21:15:14,036 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b7ba1d3744258027fc14f38de4dfb39e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1b49422fa76f2c24aeb5225bda3038e7, disabling compactions & flushes 2023-07-19 21:15:14,039 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. after waiting 0 ms 2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:14,039 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 
2023-07-19 21:15:14,039 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1b49422fa76f2c24aeb5225bda3038e7: 2023-07-19 21:15:14,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c8d2c125950422b670a99d56a4d7a087, disabling compactions & flushes 2023-07-19 21:15:14,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. after waiting 0 ms 2023-07-19 21:15:14,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c8d2c125950422b670a99d56a4d7a087: 2023-07-19 21:15:14,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:14,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b7ba1d3744258027fc14f38de4dfb39e, disabling compactions & flushes 2023-07-19 21:15:14,461 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 
2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. after waiting 0 ms 2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,461 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,461 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b7ba1d3744258027fc14f38de4dfb39e: 2023-07-19 21:15:14,466 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801314466"}]},"ts":"1689801314466"} 2023-07-19 21:15:14,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801314466"}]},"ts":"1689801314466"} 2023-07-19 21:15:14,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801314466"}]},"ts":"1689801314466"} 2023-07-19 21:15:14,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801314466"}]},"ts":"1689801314466"} 2023-07-19 21:15:14,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314466"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801314466"}]},"ts":"1689801314466"} 2023-07-19 21:15:14,470 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-19 21:15:14,471 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801314471"}]},"ts":"1689801314471"} 2023-07-19 21:15:14,473 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-19 21:15:14,477 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:14,477 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:14,478 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:14,478 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:14,478 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, ASSIGN}] 2023-07-19 21:15:14,480 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, ASSIGN 2023-07-19 21:15:14,480 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, ASSIGN 2023-07-19 21:15:14,480 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, ASSIGN 2023-07-19 21:15:14,480 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, ASSIGN 2023-07-19 21:15:14,481 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, ASSIGN 2023-07-19 21:15:14,481 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:14,481 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:14,482 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:14,482 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:14,482 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:14,632 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-19 21:15:14,635 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=b7ba1d3744258027fc14f38de4dfb39e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,635 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=c8d2c125950422b670a99d56a4d7a087, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:14,636 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801314635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801314635"}]},"ts":"1689801314635"} 2023-07-19 21:15:14,635 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=1b49422fa76f2c24aeb5225bda3038e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,635 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=53ed9ca02e58dfcd0187f0238b3416ca, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:14,636 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801314635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801314635"}]},"ts":"1689801314635"} 2023-07-19 21:15:14,636 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801314635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801314635"}]},"ts":"1689801314635"} 2023-07-19 21:15:14,635 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=517f80a0dfdf49cd4b8e1711d5c380ff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,636 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801314635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801314635"}]},"ts":"1689801314635"} 2023-07-19 21:15:14,636 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801314635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801314635"}]},"ts":"1689801314635"} 2023-07-19 21:15:14,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
b7ba1d3744258027fc14f38de4dfb39e, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:14,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure 1b49422fa76f2c24aeb5225bda3038e7, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:14,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; OpenRegionProcedure 53ed9ca02e58dfcd0187f0238b3416ca, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:14,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=56, state=RUNNABLE; OpenRegionProcedure 517f80a0dfdf49cd4b8e1711d5c380ff, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:14,645 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure c8d2c125950422b670a99d56a4d7a087, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:14,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 517f80a0dfdf49cd4b8e1711d5c380ff, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 21:15:14,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,805 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 
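The five OpenRegionProcedure children dispatched here correspond to a table pre-split into five regions; the region names in the log show the split boundaries. A hedged sketch of creating such a pre-split table is below: the split-key byte values are reconstructed from the escaped start keys in the log (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz), the column family "f" is taken from the log, and everything else (class name, standalone setup) is an assumption, not the test's actual code.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // Four split keys produce the five regions opened in the log above.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
    };
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }
}
```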
2023-07-19 21:15:14,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8d2c125950422b670a99d56a4d7a087, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 21:15:14,806 INFO [StoreOpener-517f80a0dfdf49cd4b8e1711d5c380ff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,808 DEBUG [StoreOpener-517f80a0dfdf49cd4b8e1711d5c380ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/f 2023-07-19 21:15:14,809 DEBUG [StoreOpener-517f80a0dfdf49cd4b8e1711d5c380ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/f 2023-07-19 21:15:14,809 INFO [StoreOpener-517f80a0dfdf49cd4b8e1711d5c380ff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 517f80a0dfdf49cd4b8e1711d5c380ff columnFamilyName f 2023-07-19 21:15:14,810 INFO [StoreOpener-c8d2c125950422b670a99d56a4d7a087-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,810 INFO [StoreOpener-517f80a0dfdf49cd4b8e1711d5c380ff-1] regionserver.HStore(310): Store=517f80a0dfdf49cd4b8e1711d5c380ff/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:14,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,812 DEBUG [StoreOpener-c8d2c125950422b670a99d56a4d7a087-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/f 2023-07-19 21:15:14,813 DEBUG [StoreOpener-c8d2c125950422b670a99d56a4d7a087-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/f 2023-07-19 21:15:14,813 INFO [StoreOpener-c8d2c125950422b670a99d56a4d7a087-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8d2c125950422b670a99d56a4d7a087 columnFamilyName f 2023-07-19 21:15:14,814 INFO [StoreOpener-c8d2c125950422b670a99d56a4d7a087-1] regionserver.HStore(310): Store=c8d2c125950422b670a99d56a4d7a087/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:14,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:14,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:14,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:14,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 517f80a0dfdf49cd4b8e1711d5c380ff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11634743200, jitterRate=0.08356989920139313}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:14,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 517f80a0dfdf49cd4b8e1711d5c380ff: 2023-07-19 21:15:14,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff., pid=64, masterSystemTime=1689801314798 2023-07-19 21:15:14,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,825 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:14,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7ba1d3744258027fc14f38de4dfb39e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 21:15:14,826 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=517f80a0dfdf49cd4b8e1711d5c380ff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,826 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314826"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801314826"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801314826"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801314826"}]},"ts":"1689801314826"} 2023-07-19 21:15:14,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,826 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,831 INFO [StoreOpener-b7ba1d3744258027fc14f38de4dfb39e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,834 DEBUG [StoreOpener-b7ba1d3744258027fc14f38de4dfb39e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/f 2023-07-19 21:15:14,834 DEBUG [StoreOpener-b7ba1d3744258027fc14f38de4dfb39e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/f 2023-07-19 21:15:14,834 INFO [StoreOpener-b7ba1d3744258027fc14f38de4dfb39e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7ba1d3744258027fc14f38de4dfb39e columnFamilyName f 2023-07-19 21:15:14,836 INFO [StoreOpener-b7ba1d3744258027fc14f38de4dfb39e-1] regionserver.HStore(310): Store=b7ba1d3744258027fc14f38de4dfb39e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:14,836 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=56 2023-07-19 21:15:14,836 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=56, state=SUCCESS; OpenRegionProcedure 517f80a0dfdf49cd4b8e1711d5c380ff, server=jenkins-hbase4.apache.org,33985,1689801303414 in 186 msec 2023-07-19 21:15:14,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,838 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, ASSIGN in 358 msec 2023-07-19 21:15:14,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 
b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:14,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:14,844 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8d2c125950422b670a99d56a4d7a087; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10949593600, jitterRate=0.0197603702545166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:14,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8d2c125950422b670a99d56a4d7a087: 2023-07-19 21:15:14,845 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087., pid=65, masterSystemTime=1689801314799 2023-07-19 21:15:14,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:14,846 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7ba1d3744258027fc14f38de4dfb39e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10064015680, jitterRate=-0.06271550059318542}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:14,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7ba1d3744258027fc14f38de4dfb39e: 2023-07-19 21:15:14,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e., pid=61, masterSystemTime=1689801314798 2023-07-19 21:15:14,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:14,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 
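Each store logs its CompactionConfiguration as it opens (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2 above). As a minimal sketch, those values map onto standard HBase configuration keys that a test could override before starting the mini-cluster; the specific values shown are just the defaults echoed in the log, and the class wrapper is illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // minFilesToCompact / maxFilesToCompact in the CompactionConfiguration line.
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    // Selection ratio (1.200000 in the log).
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    // minCompactSize (128 MB in the log).
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    System.out.println("compaction.min = " + conf.getInt("hbase.hstore.compaction.min", -1));
  }
}
```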
2023-07-19 21:15:14,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 53ed9ca02e58dfcd0187f0238b3416ca, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 21:15:14,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,849 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=c8d2c125950422b670a99d56a4d7a087, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:14,849 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314849"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801314849"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801314849"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801314849"}]},"ts":"1689801314849"} 2023-07-19 21:15:14,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:14,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 
2023-07-19 21:15:14,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b49422fa76f2c24aeb5225bda3038e7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 21:15:14,851 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=b7ba1d3744258027fc14f38de4dfb39e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,851 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801314851"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801314851"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801314851"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801314851"}]},"ts":"1689801314851"} 2023-07-19 21:15:14,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:14,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,857 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-19 21:15:14,857 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure b7ba1d3744258027fc14f38de4dfb39e, server=jenkins-hbase4.apache.org,33985,1689801303414 in 215 msec 2023-07-19 21:15:14,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-19 21:15:14,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure c8d2c125950422b670a99d56a4d7a087, server=jenkins-hbase4.apache.org,33539,1689801303815 in 210 msec 2023-07-19 21:15:14,860 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, ASSIGN in 379 msec 2023-07-19 21:15:14,860 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, ASSIGN in 379 msec 2023-07-19 21:15:14,863 INFO [StoreOpener-53ed9ca02e58dfcd0187f0238b3416ca-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f 
of region 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,866 INFO [StoreOpener-1b49422fa76f2c24aeb5225bda3038e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,866 DEBUG [StoreOpener-53ed9ca02e58dfcd0187f0238b3416ca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/f 2023-07-19 21:15:14,867 DEBUG [StoreOpener-53ed9ca02e58dfcd0187f0238b3416ca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/f 2023-07-19 21:15:14,868 INFO [StoreOpener-53ed9ca02e58dfcd0187f0238b3416ca-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 53ed9ca02e58dfcd0187f0238b3416ca columnFamilyName f 2023-07-19 21:15:14,868 DEBUG [StoreOpener-1b49422fa76f2c24aeb5225bda3038e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/f 2023-07-19 21:15:14,868 DEBUG [StoreOpener-1b49422fa76f2c24aeb5225bda3038e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/f 2023-07-19 21:15:14,869 INFO [StoreOpener-1b49422fa76f2c24aeb5225bda3038e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b49422fa76f2c24aeb5225bda3038e7 columnFamilyName f 2023-07-19 21:15:14,869 INFO [StoreOpener-53ed9ca02e58dfcd0187f0238b3416ca-1] regionserver.HStore(310): Store=53ed9ca02e58dfcd0187f0238b3416ca/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:14,869 INFO [StoreOpener-1b49422fa76f2c24aeb5225bda3038e7-1] 
regionserver.HStore(310): Store=1b49422fa76f2c24aeb5225bda3038e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:14,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:14,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:14,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:14,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:14,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b49422fa76f2c24aeb5225bda3038e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10617516320, jitterRate=-0.011166736483573914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:14,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b49422fa76f2c24aeb5225bda3038e7: 2023-07-19 21:15:14,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 53ed9ca02e58dfcd0187f0238b3416ca; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10701616800, jitterRate=-0.0033342689275741577}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:14,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 53ed9ca02e58dfcd0187f0238b3416ca: 
2023-07-19 21:15:14,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca., pid=63, masterSystemTime=1689801314799 2023-07-19 21:15:14,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7., pid=62, masterSystemTime=1689801314798 2023-07-19 21:15:14,889 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=1b49422fa76f2c24aeb5225bda3038e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:14,890 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314889"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801314889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801314889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801314889"}]},"ts":"1689801314889"} 2023-07-19 21:15:14,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:14,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:14,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:14,892 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 
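At this point every region has reported "Opened" and run its post-open deploy task. A small sketch of how a client (or test) can wait for that condition without parsing procedure state: `Admin.isTableAvailable` returns true once all regions of the table are open. The polling loop and class name are assumptions for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForTableOnline {
  public static void main(String[] args) throws IOException, InterruptedException {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // True once every region is open, i.e. after the last "Opened ..." message above.
      while (!admin.isTableAvailable(table)) {
        Thread.sleep(100);
      }
      System.out.println(table + " is fully available");
    }
  }
}
```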
2023-07-19 21:15:14,894 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=53ed9ca02e58dfcd0187f0238b3416ca, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:14,894 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801314893"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801314893"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801314893"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801314893"}]},"ts":"1689801314893"} 2023-07-19 21:15:14,895 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-19 21:15:14,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure 1b49422fa76f2c24aeb5225bda3038e7, server=jenkins-hbase4.apache.org,33985,1689801303414 in 254 msec 2023-07-19 21:15:14,904 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, ASSIGN in 418 msec 2023-07-19 21:15:14,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-19 21:15:14,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; OpenRegionProcedure 53ed9ca02e58dfcd0187f0238b3416ca, server=jenkins-hbase4.apache.org,33539,1689801303815 in 257 msec 2023-07-19 21:15:14,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:14,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-19 21:15:14,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, ASSIGN in 428 msec 2023-07-19 21:15:14,909 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801314909"}]},"ts":"1689801314909"} 2023-07-19 21:15:14,911 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-19 21:15:14,914 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-19 21:15:14,916 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1250 sec 2023-07-19 21:15:15,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-19 21:15:15,910 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-19 21:15:15,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:15,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:15,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:15,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:15,914 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:15,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:15,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:15,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-19 21:15:15,922 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801315922"}]},"ts":"1689801315922"} 2023-07-19 21:15:15,924 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-19 21:15:15,926 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-19 21:15:15,927 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, UNASSIGN}] 2023-07-19 21:15:15,929 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, UNASSIGN 2023-07-19 21:15:15,929 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, UNASSIGN 2023-07-19 21:15:15,929 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, UNASSIGN 2023-07-19 21:15:15,930 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, UNASSIGN 2023-07-19 21:15:15,930 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, UNASSIGN 2023-07-19 21:15:15,930 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=c8d2c125950422b670a99d56a4d7a087, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:15,930 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801315930"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801315930"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801315930"}]},"ts":"1689801315930"} 2023-07-19 21:15:15,931 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=b7ba1d3744258027fc14f38de4dfb39e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:15,931 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=517f80a0dfdf49cd4b8e1711d5c380ff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:15,931 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801315931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801315931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801315931"}]},"ts":"1689801315931"} 2023-07-19 21:15:15,931 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=1b49422fa76f2c24aeb5225bda3038e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:15,931 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=53ed9ca02e58dfcd0187f0238b3416ca, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:15,931 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801315931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801315931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801315931"}]},"ts":"1689801315931"} 
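The preceding entries show the truncate (pid 55, preserveSplits=true) completing, two GetRSGroupInfo calls against the RSGroupAdminEndpoint, and the DisableTableProcedure (pid 66) starting its UNASSIGN children. A hedged sketch of the corresponding client-side calls follows; the group name is copied from the log, `RSGroupAdminClient` is the branch-2 rsgroup client, and the ordering of calls is an assumption about how such a test might drive the cluster rather than the test's actual code.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class TruncateAndDisable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // truncateTable requires the table to be disabled; preserveSplits=true matches
      // the TruncateTableProcedure parameters in the log. The procedure re-enables
      // the table when it finishes (state=ENABLED in hbase:meta above).
      admin.disableTable(table);
      admin.truncateTable(table, true);

      // GetRSGroupInfo, as issued by the RSGroupAdminEndpoint above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo info =
          rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_268583540");
      System.out.println("group servers: " + (info == null ? "n/a" : info.getServers()));

      // This is the call that stores a DisableTableProcedure like pid=66.
      admin.disableTable(table);
    }
  }
}
```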
2023-07-19 21:15:15,931 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801315931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801315931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801315931"}]},"ts":"1689801315931"} 2023-07-19 21:15:15,931 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801315931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801315931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801315931"}]},"ts":"1689801315931"} 2023-07-19 21:15:15,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=70, state=RUNNABLE; CloseRegionProcedure c8d2c125950422b670a99d56a4d7a087, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:15,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=71, state=RUNNABLE; CloseRegionProcedure b7ba1d3744258027fc14f38de4dfb39e, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:15,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=68, state=RUNNABLE; CloseRegionProcedure 1b49422fa76f2c24aeb5225bda3038e7, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:15,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=67, state=RUNNABLE; CloseRegionProcedure 517f80a0dfdf49cd4b8e1711d5c380ff, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:15,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=69, state=RUNNABLE; CloseRegionProcedure 53ed9ca02e58dfcd0187f0238b3416ca, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:16,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-19 21:15:16,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:16,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 53ed9ca02e58dfcd0187f0238b3416ca, disabling compactions & flushes 2023-07-19 21:15:16,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:16,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:16,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 
after waiting 0 ms 2023-07-19 21:15:16,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:16,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:16,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7ba1d3744258027fc14f38de4dfb39e, disabling compactions & flushes 2023-07-19 21:15:16,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:16,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:16,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. after waiting 0 ms 2023-07-19 21:15:16,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:16,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:16,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca. 2023-07-19 21:15:16,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 53ed9ca02e58dfcd0187f0238b3416ca: 2023-07-19 21:15:16,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:16,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:16,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8d2c125950422b670a99d56a4d7a087, disabling compactions & flushes 2023-07-19 21:15:16,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:16,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:16,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 
after waiting 0 ms 2023-07-19 21:15:16,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:16,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:16,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e. 2023-07-19 21:15:16,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7ba1d3744258027fc14f38de4dfb39e: 2023-07-19 21:15:16,111 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=53ed9ca02e58dfcd0187f0238b3416ca, regionState=CLOSED 2023-07-19 21:15:16,111 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801316111"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801316111"}]},"ts":"1689801316111"} 2023-07-19 21:15:16,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:16,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:16,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b49422fa76f2c24aeb5225bda3038e7, disabling compactions & flushes 2023-07-19 21:15:16,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:16,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 2023-07-19 21:15:16,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. after waiting 0 ms 2023-07-19 21:15:16,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 
2023-07-19 21:15:16,116 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=b7ba1d3744258027fc14f38de4dfb39e, regionState=CLOSED 2023-07-19 21:15:16,116 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801316116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801316116"}]},"ts":"1689801316116"} 2023-07-19 21:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:16,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087. 2023-07-19 21:15:16,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8d2c125950422b670a99d56a4d7a087: 2023-07-19 21:15:16,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7. 
2023-07-19 21:15:16,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b49422fa76f2c24aeb5225bda3038e7: 2023-07-19 21:15:16,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=69 2023-07-19 21:15:16,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=71 2023-07-19 21:15:16,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=69, state=SUCCESS; CloseRegionProcedure 53ed9ca02e58dfcd0187f0238b3416ca, server=jenkins-hbase4.apache.org,33539,1689801303815 in 167 msec 2023-07-19 21:15:16,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=71, state=SUCCESS; CloseRegionProcedure b7ba1d3744258027fc14f38de4dfb39e, server=jenkins-hbase4.apache.org,33985,1689801303414 in 182 msec 2023-07-19 21:15:16,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:16,128 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=c8d2c125950422b670a99d56a4d7a087, regionState=CLOSED 2023-07-19 21:15:16,128 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801316128"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801316128"}]},"ts":"1689801316128"} 2023-07-19 21:15:16,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:16,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:16,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 517f80a0dfdf49cd4b8e1711d5c380ff, disabling compactions & flushes 2023-07-19 21:15:16,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:16,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:16,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. after waiting 0 ms 2023-07-19 21:15:16,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 
2023-07-19 21:15:16,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ba1d3744258027fc14f38de4dfb39e, UNASSIGN in 199 msec 2023-07-19 21:15:16,131 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53ed9ca02e58dfcd0187f0238b3416ca, UNASSIGN in 199 msec 2023-07-19 21:15:16,132 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=1b49422fa76f2c24aeb5225bda3038e7, regionState=CLOSED 2023-07-19 21:15:16,132 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689801316131"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801316131"}]},"ts":"1689801316131"} 2023-07-19 21:15:16,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:16,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff. 2023-07-19 21:15:16,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 517f80a0dfdf49cd4b8e1711d5c380ff: 2023-07-19 21:15:16,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:16,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=70 2023-07-19 21:15:16,142 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=517f80a0dfdf49cd4b8e1711d5c380ff, regionState=CLOSED 2023-07-19 21:15:16,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=70, state=SUCCESS; CloseRegionProcedure c8d2c125950422b670a99d56a4d7a087, server=jenkins-hbase4.apache.org,33539,1689801303815 in 199 msec 2023-07-19 21:15:16,142 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689801316142"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801316142"}]},"ts":"1689801316142"} 2023-07-19 21:15:16,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=68 2023-07-19 21:15:16,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=68, state=SUCCESS; CloseRegionProcedure 1b49422fa76f2c24aeb5225bda3038e7, server=jenkins-hbase4.apache.org,33985,1689801303414 in 203 msec 2023-07-19 21:15:16,147 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8d2c125950422b670a99d56a4d7a087, UNASSIGN in 215 msec 2023-07-19 21:15:16,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b49422fa76f2c24aeb5225bda3038e7, UNASSIGN in 216 msec 2023-07-19 21:15:16,148 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=67 2023-07-19 21:15:16,148 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=67, state=SUCCESS; CloseRegionProcedure 517f80a0dfdf49cd4b8e1711d5c380ff, server=jenkins-hbase4.apache.org,33985,1689801303414 in 206 msec 2023-07-19 21:15:16,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=66 2023-07-19 21:15:16,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517f80a0dfdf49cd4b8e1711d5c380ff, UNASSIGN in 221 msec 2023-07-19 21:15:16,151 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801316151"}]},"ts":"1689801316151"} 2023-07-19 21:15:16,154 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-19 21:15:16,156 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-19 21:15:16,167 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 243 msec 2023-07-19 21:15:16,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-19 21:15:16,224 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-19 21:15:16,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,239 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_268583540' 2023-07-19 21:15:16,241 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:16,244 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:16,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-19 21:15:16,257 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:16,257 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:16,257 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:16,257 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:16,257 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:16,261 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/recovered.edits] 2023-07-19 21:15:16,261 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/recovered.edits] 2023-07-19 21:15:16,262 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/recovered.edits] 2023-07-19 21:15:16,262 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/f, FileablePath, 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/recovered.edits] 2023-07-19 21:15:16,262 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/recovered.edits] 2023-07-19 21:15:16,279 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca/recovered.edits/4.seqid 2023-07-19 21:15:16,280 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff/recovered.edits/4.seqid 2023-07-19 21:15:16,280 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087/recovered.edits/4.seqid 2023-07-19 21:15:16,280 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53ed9ca02e58dfcd0187f0238b3416ca 2023-07-19 21:15:16,281 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8d2c125950422b670a99d56a4d7a087 2023-07-19 21:15:16,284 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7/recovered.edits/4.seqid 2023-07-19 21:15:16,284 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517f80a0dfdf49cd4b8e1711d5c380ff 2023-07-19 21:15:16,285 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b49422fa76f2c24aeb5225bda3038e7 2023-07-19 21:15:16,285 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e/recovered.edits/4.seqid 2023-07-19 21:15:16,291 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ba1d3744258027fc14f38de4dfb39e 2023-07-19 21:15:16,291 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-19 21:15:16,294 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,307 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-19 21:15:16,310 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-19 21:15:16,311 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,311 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-19 21:15:16,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801316312"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801316312"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801316312"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801316312"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801316312"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,315 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 21:15:16,315 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 517f80a0dfdf49cd4b8e1711d5c380ff, NAME => 'Group_testTableMoveTruncateAndDrop,,1689801313842.517f80a0dfdf49cd4b8e1711d5c380ff.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1b49422fa76f2c24aeb5225bda3038e7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689801313842.1b49422fa76f2c24aeb5225bda3038e7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 53ed9ca02e58dfcd0187f0238b3416ca, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689801313842.53ed9ca02e58dfcd0187f0238b3416ca.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c8d2c125950422b670a99d56a4d7a087, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689801313842.c8d2c125950422b670a99d56a4d7a087.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b7ba1d3744258027fc14f38de4dfb39e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689801313842.b7ba1d3744258027fc14f38de4dfb39e.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 21:15:16,315 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-19 21:15:16,315 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801316315"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:16,317 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-19 21:15:16,320 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-19 21:15:16,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 89 msec 2023-07-19 21:15:16,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-19 21:15:16,357 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-19 21:15:16,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:16,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:16,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
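The DISABLE (procId 66) and DELETE (procId 77) operations traced above originate from ordinary HBase Admin client calls made by the test. What follows is a minimal, illustrative Java sketch of that client-side sequence only; the standalone class, main method, and connection setup are assumptions for illustration and are not the test's actual source.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAndDropSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical standalone driver; the real test reuses the mini-cluster's shared connection.
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.disableTable(table); // the master runs a DisableTableProcedure (pid=66 in the log above)
      admin.deleteTable(table);  // the master runs a DeleteTableProcedure (pid=77 in the log above)
    }
  }
}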
2023-07-19 21:15:16,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:16,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup default 2023-07-19 21:15:16,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:16,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:16,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_268583540, current retry=0 2023-07-19 21:15:16,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:16,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_268583540 => default 2023-07-19 21:15:16,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:16,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_268583540 2023-07-19 21:15:16,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:16,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:16,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
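The teardown traced above (MoveTables to 'default', MoveServers back to 'default', then RemoveRSGroup) corresponds to calls on the rsgroup admin client. Below is a hedged sketch, assuming the RSGroupAdminClient and Address classes from the hbase-rsgroup module and an already-open Connection passed in by the caller; the helper class and method names are illustrative, not the test's code.

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownSketch {
  // Mirrors the MoveTables -> MoveServers -> RemoveRSGroup requests recorded in the log above.
  static void tearDownGroup(Connection conn) throws IOException {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
    // No tables remain in the group at this point, so an empty set is passed (the server logs "Ignoring").
    groupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
    // Return the group's region servers to the default group before removing the group itself.
    groupAdmin.moveServers(new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 33539),
        Address.fromParts("jenkins-hbase4.apache.org", 33985))), "default");
    groupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_268583540");
  }
}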
2023-07-19 21:15:16,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:16,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:16,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:16,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:16,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:16,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:16,402 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:16,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:16,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:16,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:16,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:16,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 151 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802516417, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:16,418 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:16,421 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:16,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,422 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:16,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:16,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,449 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=505 (was 424) Potentially hanging thread: hconnection-0x2a756921-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1076726239-639-acceptor-0@1a7634f5-ServerConnector@3a94b5f4{HTTP/1.1, (http/1.1)}{0.0.0.0:37915} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:56582 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40615 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769-prefix:jenkins-hbase4.apache.org,43325,1689801307487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769-prefix:jenkins-hbase4.apache.org,43325,1689801307487.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:44644 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58627@0x2376f5c7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43325Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:52506 [Receiving block 
BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:44686 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:56610 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1076726239-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43325-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-1020996147-172.31.14.131-1689801298089:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:52546 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58627@0x2376f5c7-SendThread(127.0.0.1:58627) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43325 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-628590627_17 at /127.0.0.1:46820 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1076726239-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2a756921-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2082917263_17 at /127.0.0.1:57942 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58627@0x2376f5c7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-16da31c0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:40615 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=817 (was 681) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=369 (was 340) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=2997 (was 2873) - AvailableMemoryMB LEAK? -
2023-07-19 21:15:16,450 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=505 is superior to 500
2023-07-19 21:15:16,468 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=369, ProcessCount=176, AvailableMemoryMB=2996
2023-07-19 21:15:16,469 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=504 is superior to 500
2023-07-19 21:15:16,471 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testValidGroupNames
2023-07-19 21:15:16,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-19 21:15:16,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-19 21:15:16,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-19 21:15:16,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-19 21:15:16,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-19 21:15:16,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-19 21:15:16,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-19 21:15:16,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-19 21:15:16,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 21:15:16,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-19 21:15:16,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-19 21:15:16,496 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-19 21:15:16,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-19 21:15:16,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 21:15:16,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-19 21:15:16,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-19 21:15:16,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-19 21:15:16,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-19 21:15:16,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-19 21:15:16,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master
2023-07-19 21:15:16,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-19 21:15:16,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 179 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802516517, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist.
2023-07-19 21:15:16,518 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:16,520 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:16,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,522 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:16,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:16,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-19 21:15:16,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:33664 deadline: 1689802516524, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 21:15:16,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-19 21:15:16,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:33664 deadline: 1689802516526, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 21:15:16,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-19 21:15:16,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 189 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:33664 deadline: 1689802516528, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-19 21:15:16,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-19 21:15:16,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-19 21:15:16,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:16,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:16,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:16,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
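The AddRSGroup attempts above show the server rejecting the names foo*, foo@ and - with a ConstraintException ("RSGroup name should only contain alphanumeric characters") while accepting foo_123 and writing /hbase/rsgroup/foo_123. A minimal client-side sketch of that validation behaviour, assuming an open Connection to the mini cluster and using the RSGroupAdminClient class named in the stack traces (the group names come from the log; the helper method itself is illustrative):

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch only: exercises the group-name validation seen in the log above.
// "conn" is assumed to be a Connection to the test cluster.
static void tryGroupNames(Connection conn) throws Exception {
  RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
  for (String bad : new String[] { "foo*", "foo@", "-" }) {
    try {
      groupAdmin.addRSGroup(bad);        // rejected server-side
    } catch (ConstraintException expected) {
      // "RSGroup name should only contain alphanumeric characters"
    }
  }
  groupAdmin.addRSGroup("foo_123");      // accepted: letters, digits and '_' pass the check
  groupAdmin.removeRSGroup("foo_123");   // cleanup, as the log's teardown does
}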
2023-07-19 21:15:16,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:16,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:16,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:16,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-19 21:15:16,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:16,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:16,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:16,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:16,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:16,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:16,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:16,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:16,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:16,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:16,577 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:16,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:16,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:16,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:16,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:16,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 223 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802516594, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:16,595 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:16,597 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:16,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,598 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:16,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:16,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,624 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x63197ba-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=369 (was 369), ProcessCount=176 (was 176), AvailableMemoryMB=2986 (was 2996) 2023-07-19 21:15:16,624 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-19 21:15:16,647 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=369, ProcessCount=176, AvailableMemoryMB=2982 2023-07-19 21:15:16,647 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-19 21:15:16,647 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-19 21:15:16,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:16,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
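The warning repeated at the start of each test method ("Got this on setup, FYI ... Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist") comes from a moveServers call targeting port 36267, which in this run is the master's RPC port rather than one of the region servers listed in the default group (33539, 33985, 43325, 45225). A hedged sketch of that call path, using the Address and RSGroupAdminClient types that appear in the stack trace; the host and port are taken from the log, the surrounding method is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch only: mirrors the failing MoveServers request seen in the log.
static void moveMasterAddress(Connection conn) throws Exception {
  RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
  Address master = Address.fromParts("jenkins-hbase4.apache.org", 36267);
  try {
    groupAdmin.moveServers(Collections.singleton(master), "master");
  } catch (ConstraintException e) {
    // "Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist."
    // The address is not an online region server, so the server-side check rejects it.
  }
}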
2023-07-19 21:15:16,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:16,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:16,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:16,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:16,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:16,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:16,685 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:16,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:16,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:16,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:16,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:16,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:16,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 251 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802516701, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:16,702 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:16,704 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:16,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,705 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:16,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:16,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:16,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:16,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-19 21:15:16,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:16,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:16,715 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 21:15:16,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:16,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:16,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:16,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325] to rsgroup bar 2023-07-19 21:15:16,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:16,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:16,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:16,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:16,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(238): Moving server region 1934a6e0c77f024959d2c8636ae430b9, which do not belong to RSGroup bar 2023-07-19 21:15:16,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE 2023-07-19 21:15:16,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-19 21:15:16,738 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE 2023-07-19 21:15:16,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=79, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 21:15:16,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-19 21:15:16,739 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:16,740 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-19 21:15:16,740 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801316739"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801316739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801316739"}]},"ts":"1689801316739"} 2023-07-19 21:15:16,741 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43325,1689801307487, state=CLOSING 2023-07-19 21:15:16,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:16,748 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:16,749 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:16,749 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=79, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:16,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-19 21:15:16,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:16,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1934a6e0c77f024959d2c8636ae430b9, disabling compactions & flushes 2023-07-19 21:15:16,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:16,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:16,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. after waiting 0 ms 2023-07-19 21:15:16,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
2023-07-19 21:15:16,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1934a6e0c77f024959d2c8636ae430b9 1/1 column families, dataSize=4.98 KB heapSize=8.39 KB 2023-07-19 21:15:16,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:16,898 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:16,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:16,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:16,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:16,898 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=39.10 KB heapSize=60.12 KB 2023-07-19 21:15:16,974 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=36.21 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:16,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.98 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:16,986 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:16,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:17,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/3ea852df9c774ec5874734afd6301aa5 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:17,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:17,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/3ea852df9c774ec5874734afd6301aa5, entries=9, sequenceid=32, filesize=5.5 K 2023-07-19 21:15:17,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.98 KB/5100, heapSize ~8.38 KB/8576, currentSize=0 B/0 for 
1934a6e0c77f024959d2c8636ae430b9 in 123ms, sequenceid=32, compaction requested=false 2023-07-19 21:15:17,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-19 21:15:17,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:17,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:17,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:17,048 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/rep_barrier/200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1934a6e0c77f024959d2c8636ae430b9 move to jenkins-hbase4.apache.org,45225,1689801303640 record at close sequenceid=32 2023-07-19 21:15:17,052 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:17,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:17,061 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,105 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,116 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/0ff87909fff043ee91857866cd8accb0 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:17,128 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:17,129 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/0ff87909fff043ee91857866cd8accb0, entries=31, sequenceid=101, filesize=8.4 K 2023-07-19 21:15:17,130 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/rep_barrier/200a3a1517c44fd7afeb7c56bf918e77 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier/200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier/200a3a1517c44fd7afeb7c56bf918e77, entries=10, sequenceid=101, filesize=6.1 K 2023-07-19 21:15:17,140 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/84d6c7a7b3024dc0b7a442e303a057e6 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,149 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/84d6c7a7b3024dc0b7a442e303a057e6, entries=11, sequenceid=101, filesize=6.0 K 2023-07-19 21:15:17,153 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~39.10 KB/40037, heapSize ~60.07 KB/61512, currentSize=0 B/0 for 1588230740 in 255ms, sequenceid=101, compaction requested=false 2023-07-19 21:15:17,169 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=18 2023-07-19 21:15:17,170 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:17,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:17,171 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:17,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45225,1689801303640 record at close sequenceid=101 2023-07-19 21:15:17,175 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-19 21:15:17,182 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in 
hbase:meta; skipping -- ServerName required 2023-07-19 21:15:17,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=79 2023-07-19 21:15:17,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=79, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43325,1689801307487 in 433 msec 2023-07-19 21:15:17,192 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:17,344 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45225,1689801303640, state=OPENING 2023-07-19 21:15:17,345 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:17,346 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=79, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:17,346 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:17,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 21:15:17,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:17,505 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45225%2C1689801303640.meta, suffix=.meta, logDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640, archiveDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs, maxLogs=32 2023-07-19 21:15:17,522 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK] 2023-07-19 21:15:17,522 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK] 2023-07-19 21:15:17,523 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK] 2023-07-19 21:15:17,525 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640/jenkins-hbase4.apache.org%2C45225%2C1689801303640.meta.1689801317506.meta 2023-07-19 21:15:17,525 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43895,DS-4af41a0e-57fd-4734-935f-88dc86c0119f,DISK], DatanodeInfoWithStorage[127.0.0.1:37103,DS-7f25dcb0-5556-4324-834d-aa6465a78e8b,DISK], DatanodeInfoWithStorage[127.0.0.1:36045,DS-55b2d5e1-15e1-4ad2-8ef4-044af8c5c779,DISK]] 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 21:15:17,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 21:15:17,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 21:15:17,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:17,531 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:17,531 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info 2023-07-19 21:15:17,531 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:17,538 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:17,538 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/0ff87909fff043ee91857866cd8accb0 2023-07-19 21:15:17,544 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/8c9ad1f4eea141428139ceb6f35d4f6d 2023-07-19 21:15:17,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:17,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:17,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:17,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:17,546 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:17,551 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,551 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier/200a3a1517c44fd7afeb7c56bf918e77 2023-07-19 21:15:17,551 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:17,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:17,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:17,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table 2023-07-19 21:15:17,553 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:17,568 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,569 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/84d6c7a7b3024dc0b7a442e303a057e6 2023-07-19 21:15:17,574 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/aea4f741c841413985d15932099557f9 2023-07-19 21:15:17,575 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:17,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:17,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740 2023-07-19 21:15:17,586 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-19 21:15:17,588 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:17,589 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=105; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9961382880, jitterRate=-0.07227392494678497}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:17,589 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:17,590 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=82, masterSystemTime=1689801317499 2023-07-19 21:15:17,599 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 21:15:17,599 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 21:15:17,600 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45225,1689801303640, state=OPEN 2023-07-19 21:15:17,601 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:17,601 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:17,602 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=CLOSED 2023-07-19 21:15:17,602 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801317602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801317602"}]},"ts":"1689801317602"} 2023-07-19 21:15:17,603 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43325] ipc.CallRunner(144): callId: 188 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:34966 deadline: 1689801377602, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45225 startCode=1689801303640. As of locationSeqNum=101. 
2023-07-19 21:15:17,603 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=79 2023-07-19 21:15:17,603 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=79, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45225,1689801303640 in 256 msec 2023-07-19 21:15:17,605 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 865 msec 2023-07-19 21:15:17,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-19 21:15:17,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; CloseRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,43325,1689801307487 in 963 msec 2023-07-19 21:15:17,708 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:17,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-19 21:15:17,858 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:17,859 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801317858"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801317858"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801317858"}]},"ts":"1689801317858"} 2023-07-19 21:15:17,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=78, state=RUNNABLE; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:18,020 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1934a6e0c77f024959d2c8636ae430b9, NAME => 'hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. service=MultiRowMutationService 2023-07-19 21:15:18,021 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
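[Editor's note] The RegionMovedException entries around this point ("Region moved to: ... port=45225 ... As of locationSeqNum=101") are the normal signal that hbase:meta now lives on a different region server; the HBase client invalidates its cached location and retries. A minimal sketch of refreshing a cached location explicitly with the standard RegionLocator API (the table and row are chosen only to mirror this log; ordinary applications can simply let the built-in retry handle it):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RelocateMetaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // reload=true bypasses the client-side location cache, which is what the
      // retry does after receiving a RegionMovedException from the old server.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println("hbase:meta is now on " + loc.getServerName());
    }
  }
}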
2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,023 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,024 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:18,024 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m 2023-07-19 21:15:18,025 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1934a6e0c77f024959d2c8636ae430b9 columnFamilyName m 2023-07-19 21:15:18,037 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:18,037 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/3ea852df9c774ec5874734afd6301aa5 2023-07-19 21:15:18,045 DEBUG [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(539): loaded hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/6998149e1ee246f6b75ccb6dbcfc034a 2023-07-19 21:15:18,046 INFO [StoreOpener-1934a6e0c77f024959d2c8636ae430b9-1] regionserver.HStore(310): Store=1934a6e0c77f024959d2c8636ae430b9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:18,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:18,054 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1934a6e0c77f024959d2c8636ae430b9; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5cc608ba, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:18,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:18,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9., pid=83, masterSystemTime=1689801318015 2023-07-19 21:15:18,057 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:18,057 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
2023-07-19 21:15:18,057 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1934a6e0c77f024959d2c8636ae430b9, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:18,058 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801318057"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801318057"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801318057"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801318057"}]},"ts":"1689801318057"} 2023-07-19 21:15:18,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=78 2023-07-19 21:15:18,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=78, state=SUCCESS; OpenRegionProcedure 1934a6e0c77f024959d2c8636ae430b9, server=jenkins-hbase4.apache.org,45225,1689801303640 in 199 msec 2023-07-19 21:15:18,063 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=1934a6e0c77f024959d2c8636ae430b9, REOPEN/MOVE in 1.3270 sec 2023-07-19 21:15:18,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414, jenkins-hbase4.apache.org,43325,1689801307487] are moved back to default 2023-07-19 21:15:18,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-19 21:15:18,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:18,742 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43325] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:34980 deadline: 1689801378742, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45225 startCode=1689801303640. As of locationSeqNum=32. 2023-07-19 21:15:18,843 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43325] ipc.CallRunner(144): callId: 15 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34980 deadline: 1689801378843, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45225 startCode=1689801303640. As of locationSeqNum=101. 
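[Editor's note] "Move servers done: default => bar" closes out the MoveServers request issued at 21:15:16,729; the intervening REOPEN/MOVE procedures (pid=78/79) exist only to evacuate hbase:meta and hbase:rsgroup from the servers leaving the default group. A hedged sketch of the client call that triggers all of this, again assuming RSGroupAdminClient from the hbase-rsgroup module; the host:port pairs below are copied from the log and would be your own region servers in practice:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // The three servers named in the MoveServers request in this log.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33539));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33985));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43325));

      // Regions still hosted on these servers are first moved back to servers
      // that remain in the default group, then the servers join "bar".
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}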
2023-07-19 21:15:18,944 DEBUG [hconnection-0x63197ba-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:18,947 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:18,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:18,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:18,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 21:15:18,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:18,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:18,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:18,968 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:18,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-19 21:15:18,968 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43325] ipc.CallRunner(144): callId: 193 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:34966 deadline: 1689801378968, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45225 startCode=1689801303640. As of locationSeqNum=32. 
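[Editor's note] The shell-style descriptor in the CreateTable request above corresponds roughly to the following Java admin call. This is a sketch only; the attributes mirror what the log shows for 'Group_testFailRemoveGroup' (single family 'f', one version, no bloom filter, REGION_REPLICATION 1) rather than any recommended settings:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build();
      // Blocks until the master-side CreateTableProcedure (pid=84 in this log) completes.
      admin.createTable(desc);
    }
  }
}

Server-side, the call becomes the CreateTableProcedure whose states (PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE, POST_OPERATION) are traced in the records that follow.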
2023-07-19 21:15:18,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 21:15:19,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 21:15:19,073 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:19,074 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:19,074 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:19,074 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:19,083 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:19,085 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,086 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 empty. 2023-07-19 21:15:19,086 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,086 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 21:15:19,109 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:19,111 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 696aafaf90f70fcd78c29086c83e1571, NAME => 'Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 696aafaf90f70fcd78c29086c83e1571, disabling compactions & flushes 2023-07-19 21:15:19,123 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. after waiting 0 ms 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,123 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,123 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:19,125 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:19,126 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801319126"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801319126"}]},"ts":"1689801319126"} 2023-07-19 21:15:19,128 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 21:15:19,129 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:19,129 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801319129"}]},"ts":"1689801319129"} 2023-07-19 21:15:19,130 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-19 21:15:19,134 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, ASSIGN}] 2023-07-19 21:15:19,138 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, ASSIGN 2023-07-19 21:15:19,139 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:19,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 21:15:19,291 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:19,291 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801319291"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801319291"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801319291"}]},"ts":"1689801319291"} 2023-07-19 21:15:19,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:19,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:19,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 696aafaf90f70fcd78c29086c83e1571, NAME => 'Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:19,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:19,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,456 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,457 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:19,458 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:19,458 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 696aafaf90f70fcd78c29086c83e1571 columnFamilyName f 2023-07-19 21:15:19,459 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(310): Store=696aafaf90f70fcd78c29086c83e1571/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:19,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,460 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:19,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 696aafaf90f70fcd78c29086c83e1571; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10337811360, jitterRate=-0.03721629083156586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:19,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:19,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571., pid=86, masterSystemTime=1689801319448 2023-07-19 21:15:19,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:19,469 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:19,469 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801319469"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801319469"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801319469"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801319469"}]},"ts":"1689801319469"} 2023-07-19 21:15:19,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-19 21:15:19,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640 in 177 msec 2023-07-19 21:15:19,478 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-19 21:15:19,478 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, ASSIGN in 339 msec 2023-07-19 21:15:19,479 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:19,479 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801319479"}]},"ts":"1689801319479"} 2023-07-19 21:15:19,481 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-19 21:15:19,484 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:19,485 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 519 msec 2023-07-19 21:15:19,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-19 21:15:19,573 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-19 21:15:19,574 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-19 21:15:19,574 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:19,577 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43325] ipc.CallRunner(144): callId: 280 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:50036 deadline: 1689801379577, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45225 startCode=1689801303640. As of locationSeqNum=101. 2023-07-19 21:15:19,679 DEBUG [hconnection-0x2231fec8-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:19,681 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:19,690 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-19 21:15:19,690 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:19,690 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-19 21:15:19,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-19 21:15:19,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:19,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:19,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:19,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:19,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-19 21:15:19,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 696aafaf90f70fcd78c29086c83e1571 to RSGroup bar 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 21:15:19,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:19,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE 2023-07-19 21:15:19,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-19 21:15:19,701 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE 2023-07-19 21:15:19,702 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:19,702 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801319702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801319702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801319702"}]},"ts":"1689801319702"} 2023-07-19 21:15:19,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:19,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 696aafaf90f70fcd78c29086c83e1571, disabling compactions & flushes 2023-07-19 21:15:19,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. after waiting 0 ms 2023-07-19 21:15:19,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:19,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:19,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:19,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:19,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 696aafaf90f70fcd78c29086c83e1571 move to jenkins-hbase4.apache.org,33985,1689801303414 record at close sequenceid=2 2023-07-19 21:15:19,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:19,872 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSED 2023-07-19 21:15:19,872 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801319872"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801319872"}]},"ts":"1689801319872"} 2023-07-19 21:15:19,877 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-19 21:15:19,877 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640 in 171 msec 2023-07-19 21:15:19,879 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:20,029 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 21:15:20,030 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:20,031 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801320030"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801320030"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801320030"}]},"ts":"1689801320030"} 2023-07-19 21:15:20,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:20,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:20,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 696aafaf90f70fcd78c29086c83e1571, NAME => 'Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:20,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:20,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,193 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,195 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:20,195 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:20,195 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 696aafaf90f70fcd78c29086c83e1571 columnFamilyName f 2023-07-19 21:15:20,196 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(310): Store=696aafaf90f70fcd78c29086c83e1571/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:20,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,198 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 696aafaf90f70fcd78c29086c83e1571; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11266091840, jitterRate=0.049236565828323364}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:20,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:20,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571., pid=89, masterSystemTime=1689801320187 2023-07-19 21:15:20,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:20,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:20,209 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:20,209 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801320208"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801320208"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801320208"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801320208"}]},"ts":"1689801320208"} 2023-07-19 21:15:20,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-19 21:15:20,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,33985,1689801303414 in 178 msec 2023-07-19 21:15:20,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE in 514 msec 2023-07-19 21:15:20,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-19 21:15:20,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
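Note on the MoveTables sequence above: the znode updates under /hbase/rsgroup followed by the REOPEN/MOVE TransitRegionStateProcedure (close of region 696aafaf90f70fcd78c29086c83e1571 on the server at port 45225, reopen on the server at port 33985) are what a client-side move-tables call against the RSGroup admin endpoint produces on the master. A minimal sketch of that call, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient API; the table and group names are taken from the log, the connection setup is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToBar {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Reassigns every region of the table to the servers of group "bar";
          // each region goes through the CLOSE -> OPEN (REOPEN/MOVE) transition seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }
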
2023-07-19 21:15:20,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:20,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:20,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:20,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-19 21:15:20,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:20,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 21:15:20,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:20,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 290 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:33664 deadline: 1689802520708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-19 21:15:20,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325] to rsgroup default 2023-07-19 21:15:20,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:20,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 292 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:33664 deadline: 1689802520709, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-19 21:15:20,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-19 21:15:20,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:20,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:20,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:20,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:20,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-19 21:15:20,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 696aafaf90f70fcd78c29086c83e1571 to RSGroup default 2023-07-19 21:15:20,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE 2023-07-19 21:15:20,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 21:15:20,719 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE 2023-07-19 21:15:20,719 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:20,720 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801320719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801320719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801320719"}]},"ts":"1689801320719"} 2023-07-19 21:15:20,721 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:20,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 696aafaf90f70fcd78c29086c83e1571, disabling compactions & flushes 2023-07-19 21:15:20,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:20,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:20,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. after waiting 0 ms 2023-07-19 21:15:20,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:20,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:20,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:20,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:20,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 696aafaf90f70fcd78c29086c83e1571 move to jenkins-hbase4.apache.org,45225,1689801303640 record at close sequenceid=5 2023-07-19 21:15:20,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:20,885 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSED 2023-07-19 21:15:20,885 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801320885"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801320885"}]},"ts":"1689801320885"} 2023-07-19 21:15:20,889 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-19 21:15:20,889 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,33985,1689801303414 in 166 msec 2023-07-19 21:15:20,890 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:21,040 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:21,040 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801321040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801321040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801321040"}]},"ts":"1689801321040"} 2023-07-19 21:15:21,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:21,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 
2023-07-19 21:15:21,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 696aafaf90f70fcd78c29086c83e1571, NAME => 'Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:21,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:21,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,201 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,203 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:21,203 DEBUG [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f 2023-07-19 21:15:21,204 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 696aafaf90f70fcd78c29086c83e1571 columnFamilyName f 2023-07-19 21:15:21,205 INFO [StoreOpener-696aafaf90f70fcd78c29086c83e1571-1] regionserver.HStore(310): Store=696aafaf90f70fcd78c29086c83e1571/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:21,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,207 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:21,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 696aafaf90f70fcd78c29086c83e1571; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10587929280, jitterRate=-0.013922244310379028}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:21,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:21,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571., pid=92, masterSystemTime=1689801321195 2023-07-19 21:15:21,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:21,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:21,215 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:21,215 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801321214"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801321214"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801321214"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801321214"}]},"ts":"1689801321214"} 2023-07-19 21:15:21,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-19 21:15:21,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640 in 174 msec 2023-07-19 21:15:21,219 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, REOPEN/MOVE in 501 msec 2023-07-19 21:15:21,629 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 21:15:21,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-19 21:15:21,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-19 21:15:21,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:21,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:21,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:21,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 21:15:21,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:21,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 299 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:33664 deadline: 1689802521835, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
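Note on the ConstraintExceptions above: they make the required ordering explicit. removeRSGroup is rejected while the group still holds tables, moveServers out of the group is rejected while its tables would be left without servers, and removeRSGroup is rejected again while the group still holds servers; the next lines show the test moving the three servers back to default, after which the removal succeeds. A sketch of the order that works (tables out first, then servers, then remove), assuming the same RSGroupAdminClient API; the server addresses are the ones listed in the log, the rest is illustrative:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupInOrder {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // 1. A group that still owns tables can neither be removed nor emptied of servers,
          //    so move its tables back to the default group first.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");

          // 2. Then move the group's region servers back to the default group.
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromString("jenkins-hbase4.apache.org:33539"));
          servers.add(Address.fromString("jenkins-hbase4.apache.org:33985"));
          servers.add(Address.fromString("jenkins-hbase4.apache.org:43325"));
          rsGroupAdmin.moveServers(servers, "default");

          // 3. Only an empty group can be removed; this is the call rejected twice above.
          rsGroupAdmin.removeRSGroup("bar");
        }
      }
    }
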
2023-07-19 21:15:21,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325] to rsgroup default 2023-07-19 21:15:21,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:21,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-19 21:15:21,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:21,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:21,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-19 21:15:21,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414, jenkins-hbase4.apache.org,43325,1689801307487] are moved back to bar 2023-07-19 21:15:21,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-19 21:15:21,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:21,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:21,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-19 21:15:21,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:21,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:21,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:21,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:21,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:21,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:21,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:21,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:21,865 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-19 21:15:21,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-19 21:15:21,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:21,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 21:15:21,869 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801321869"}]},"ts":"1689801321869"} 2023-07-19 21:15:21,871 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-19 21:15:21,873 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-19 21:15:21,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, UNASSIGN}] 2023-07-19 21:15:21,875 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, UNASSIGN 2023-07-19 21:15:21,876 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:21,876 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801321876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801321876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801321876"}]},"ts":"1689801321876"} 2023-07-19 21:15:21,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:21,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 21:15:22,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:22,030 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 696aafaf90f70fcd78c29086c83e1571, disabling compactions & flushes 2023-07-19 21:15:22,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:22,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:22,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. after waiting 0 ms 2023-07-19 21:15:22,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:22,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 21:15:22,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571. 2023-07-19 21:15:22,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 696aafaf90f70fcd78c29086c83e1571: 2023-07-19 21:15:22,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:22,040 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=696aafaf90f70fcd78c29086c83e1571, regionState=CLOSED 2023-07-19 21:15:22,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689801322040"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801322040"}]},"ts":"1689801322040"} 2023-07-19 21:15:22,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-19 21:15:22,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 696aafaf90f70fcd78c29086c83e1571, server=jenkins-hbase4.apache.org,45225,1689801303640 in 165 msec 2023-07-19 21:15:22,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-19 21:15:22,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=696aafaf90f70fcd78c29086c83e1571, UNASSIGN in 171 msec 2023-07-19 21:15:22,052 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801322052"}]},"ts":"1689801322052"} 2023-07-19 21:15:22,053 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-19 21:15:22,056 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-19 21:15:22,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 191 msec 2023-07-19 21:15:22,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-19 21:15:22,172 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-19 21:15:22,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-19 21:15:22,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,175 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-19 21:15:22,176 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:22,180 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:22,182 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits] 2023-07-19 21:15:22,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-19 21:15:22,189 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/10.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571/recovered.edits/10.seqid 2023-07-19 21:15:22,189 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testFailRemoveGroup/696aafaf90f70fcd78c29086c83e1571 2023-07-19 21:15:22,189 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-19 21:15:22,192 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,195 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-19 21:15:22,198 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-19 21:15:22,199 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,199 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-19 21:15:22,200 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801322200"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:22,206 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 21:15:22,206 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 696aafaf90f70fcd78c29086c83e1571, NAME => 'Group_testFailRemoveGroup,,1689801318964.696aafaf90f70fcd78c29086c83e1571.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 21:15:22,206 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-19 21:15:22,206 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801322206"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:22,222 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-19 21:15:22,224 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-19 21:15:22,225 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 52 msec 2023-07-19 21:15:22,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-19 21:15:22,285 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-19 21:15:22,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:22,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
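[Editor's note: the procedures recorded above (pid=93 DISABLE, pid=96 DELETE of Group_testFailRemoveGroup) and the rsgroup teardown entries that follow correspond to an ordinary client-side disable/delete sequence plus TestRSGroupsBase's attempt to move the master's address into the 'master' rsgroup, which fails with the ConstraintException shown in the stack traces below. The following is a minimal, illustrative sketch of those client-side calls only, assuming a standard HBase connection; it is not the actual TestRSGroupsAdmin1 source, and the class name RsGroupTeardownSketch is hypothetical.]

// Illustrative sketch: client-side calls whose server-side effects appear in this log.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Disable then delete the table; the master runs these as procedures
      // (DisableTableProcedure pid=93, DeleteTableProcedure pid=96 in the log above).
      admin.disableTable(table);
      admin.deleteTable(table);

      // Teardown then tries to move the master's address into the 'master' rsgroup.
      // The master is not a live region server, so the RPC fails with
      // ConstraintException ("Server ... is either offline or it does not exist."),
      // which the test base logs as "Got this on setup, FYI" (stack trace below).
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:36267")),
            "master");
      } catch (ConstraintException expected) {
        // Expected during cleanup; the test continues and waits for group state to settle.
      }
    }
  }
}

[End of editor's note; the log continues below.]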
2023-07-19 21:15:22,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:22,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:22,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:22,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:22,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:22,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:22,303 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:22,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:22,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:22,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:22,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:22,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:22,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 347 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802522314, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:22,315 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:22,317 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:22,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,318 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:22,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:22,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:22,338 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=525 (was 507) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_753555892_17 at /127.0.0.1:57992 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_753555892_17 at /127.0.0.1:46960 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-15 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2042935688_17 at /127.0.0.1:46930 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2231fec8-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2042935688_17 at /127.0.0.1:40088 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2042935688_17 at /127.0.0.1:57988 [Receiving block BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2042935688_17 at /127.0.0.1:40106 [Waiting for 
operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769-prefix:jenkins-hbase4.apache.org,45225,1689801303640.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1020996147-172.31.14.131-1689801298089:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=824 (was 817) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 369), ProcessCount=176 (was 176), AvailableMemoryMB=2773 (was 2982) 2023-07-19 21:15:22,338 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-19 21:15:22,356 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=525, OpenFileDescriptor=824, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=176, AvailableMemoryMB=2772 2023-07-19 21:15:22,356 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-19 21:15:22,356 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-19 21:15:22,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:22,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 21:15:22,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:22,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:22,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:22,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:22,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:22,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:22,383 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:22,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:22,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,387 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:22,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:22,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:22,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:22,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 375 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802522408, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:22,409 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:22,415 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:22,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,418 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:22,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:22,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:22,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:22,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:22,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:22,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:22,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,436 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539] to rsgroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:22,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:22,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815] are moved back to default 2023-07-19 21:15:22,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:22,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:22,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:22,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:22,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:22,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:22,460 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:22,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-19 21:15:22,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 21:15:22,463 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:22,464 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:22,464 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:22,465 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:22,474 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:22,476 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,477 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e empty. 2023-07-19 21:15:22,477 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,477 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 21:15:22,529 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:22,532 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => f675e2914e4d3c58ccbbf79c7061146e, NAME => 'GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:22,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 21:15:22,573 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; 
preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:22,573 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing f675e2914e4d3c58ccbbf79c7061146e, disabling compactions & flushes 2023-07-19 21:15:22,574 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,574 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,574 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. after waiting 0 ms 2023-07-19 21:15:22,574 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,574 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,574 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for f675e2914e4d3c58ccbbf79c7061146e: 2023-07-19 21:15:22,577 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:22,578 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801322578"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801322578"}]},"ts":"1689801322578"} 2023-07-19 21:15:22,580 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
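The entries just above show the client adding rsgroup Group_testMultiTableMove_1172547809 and moving server jenkins-hbase4.apache.org:33539 into it, while the ConstraintException stack trace earlier in the log is what RSGroupAdminServer.moveServers returns when the named server is not a live region server. The following is a minimal client-side sketch of those two calls, assuming the branch-2 RSGroupAdminClient (an internal-audience helper used by the shell and tests); the connection setup is a placeholder and the group/server names are copied from the log, so this is not the test's own code.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the target group, then move one live region server into it.
      rsGroupAdmin.addRSGroup("Group_testMultiTableMove_1172547809");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33539)),
          "Group_testMultiTableMove_1172547809");
      // Naming a server that is offline or unknown fails with a ConstraintException,
      // as in the stack trace above.
    }
  }
}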
2023-07-19 21:15:22,581 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:22,581 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801322581"}]},"ts":"1689801322581"} 2023-07-19 21:15:22,583 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-19 21:15:22,588 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:22,588 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:22,588 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:22,588 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:22,588 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:22,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, ASSIGN}] 2023-07-19 21:15:22,591 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, ASSIGN 2023-07-19 21:15:22,592 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:22,742 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:22,744 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:22,744 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801322744"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801322744"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801322744"}]},"ts":"1689801322744"} 2023-07-19 21:15:22,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:22,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 21:15:22,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f675e2914e4d3c58ccbbf79c7061146e, NAME => 'GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:22,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:22,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,906 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,908 DEBUG [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/f 2023-07-19 21:15:22,908 DEBUG [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/f 2023-07-19 21:15:22,908 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f675e2914e4d3c58ccbbf79c7061146e columnFamilyName f 2023-07-19 21:15:22,909 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] regionserver.HStore(310): Store=f675e2914e4d3c58ccbbf79c7061146e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:22,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:22,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:22,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f675e2914e4d3c58ccbbf79c7061146e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9773822400, jitterRate=-0.08974185585975647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:22,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f675e2914e4d3c58ccbbf79c7061146e: 2023-07-19 21:15:22,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e., pid=99, masterSystemTime=1689801322898 2023-07-19 21:15:22,928 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:22,928 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801322928"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801322928"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801322928"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801322928"}]},"ts":"1689801322928"} 
2023-07-19 21:15:22,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:22,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-19 21:15:22,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,45225,1689801303640 in 184 msec 2023-07-19 21:15:22,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-19 21:15:22,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, ASSIGN in 348 msec 2023-07-19 21:15:22,941 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:22,941 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801322941"}]},"ts":"1689801322941"} 2023-07-19 21:15:22,943 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-19 21:15:22,946 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:22,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 490 msec 2023-07-19 21:15:23,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-19 21:15:23,067 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-19 21:15:23,067 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-19 21:15:23,067 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:23,077 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-19 21:15:23,077 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:23,077 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 
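The create 'GrouptestMultiTableMoveA' request above (single family 'f', default attributes) runs through CreateTableProcedure pid=97 while the client repeatedly polls "Checking to see if procedure is done"; the same flow repeats below for GrouptestMultiTableMoveB. A short sketch of the equivalent Admin call follows, assuming an already-open Connection; table and family names are taken from the log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  static void createTableA(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      // One column family 'f' with default attributes, matching the logged descriptor.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
      // createTable blocks until the CreateTableProcedure completes, which is what the
      // repeated "Checking to see if procedure is done pid=97" entries above reflect.
    }
  }
}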
2023-07-19 21:15:23,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:23,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:23,083 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:23,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-19 21:15:23,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 21:15:23,086 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:23,087 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,087 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:23,088 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:23,091 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:23,093 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,094 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b empty. 
2023-07-19 21:15:23,094 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,094 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 21:15:23,134 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:23,135 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6aadbd3b6a6d9947160d66ce2e3667b, NAME => 'GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing f6aadbd3b6a6d9947160d66ce2e3667b, disabling compactions & flushes 2023-07-19 21:15:23,149 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. after waiting 0 ms 2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,149 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 
2023-07-19 21:15:23,149 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for f6aadbd3b6a6d9947160d66ce2e3667b: 2023-07-19 21:15:23,152 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:23,153 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323153"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801323153"}]},"ts":"1689801323153"} 2023-07-19 21:15:23,154 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:23,155 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:23,155 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801323155"}]},"ts":"1689801323155"} 2023-07-19 21:15:23,156 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-19 21:15:23,160 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:23,160 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:23,160 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:23,160 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:23,161 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:23,161 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, ASSIGN}] 2023-07-19 21:15:23,163 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, ASSIGN 2023-07-19 21:15:23,164 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:23,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 21:15:23,314 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:23,316 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:23,316 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323316"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801323316"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801323316"}]},"ts":"1689801323316"} 2023-07-19 21:15:23,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:23,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 21:15:23,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6aadbd3b6a6d9947160d66ce2e3667b, NAME => 'GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:23,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:23,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,475 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,476 DEBUG [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/f 2023-07-19 21:15:23,476 DEBUG [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/f 2023-07-19 21:15:23,476 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6aadbd3b6a6d9947160d66ce2e3667b columnFamilyName f 2023-07-19 21:15:23,477 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] regionserver.HStore(310): Store=f6aadbd3b6a6d9947160d66ce2e3667b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:23,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:23,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6aadbd3b6a6d9947160d66ce2e3667b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12008168640, jitterRate=0.11834785342216492}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6aadbd3b6a6d9947160d66ce2e3667b: 2023-07-19 21:15:23,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b., pid=102, masterSystemTime=1689801323469 2023-07-19 21:15:23,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 
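The CompactionConfiguration line logged each time a store opens (minCompactSize 128 MB, min/max files to compact 3/10, ratio 1.2, off-peak ratio 5.0) reports the defaults in effect for family 'f'. As an illustration only, and assuming the standard hbase.hstore.compaction.* keys, such values could be overridden per column family roughly as sketched below; the numbers are the logged defaults, not a tuning recommendation.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CompactionConfigSketch {
  // Builds a family descriptor carrying the same knobs the CompactionConfiguration
  // entry above reports, expressed as per-family configuration overrides.
  static ColumnFamilyDescriptor familyWithCompactionOverrides() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
        .setConfiguration("hbase.hstore.compaction.min", "3")    // minFilesToCompact
        .setConfiguration("hbase.hstore.compaction.max", "10")   // maxFilesToCompact
        .setConfiguration("hbase.hstore.compaction.ratio", "1.2")
        .setConfiguration("hbase.hstore.compaction.ratio.offpeak", "5.0")
        .build();
  }
}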
2023-07-19 21:15:23,486 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:23,486 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323486"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801323486"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801323486"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801323486"}]},"ts":"1689801323486"} 2023-07-19 21:15:23,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-19 21:15:23,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,45225,1689801303640 in 169 msec 2023-07-19 21:15:23,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-19 21:15:23,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, ASSIGN in 328 msec 2023-07-19 21:15:23,491 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:23,492 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801323492"}]},"ts":"1689801323492"} 2023-07-19 21:15:23,493 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-19 21:15:23,496 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:23,497 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 416 msec 2023-07-19 21:15:23,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-19 21:15:23,689 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-19 21:15:23,689 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-19 21:15:23,689 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:23,694 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-19 21:15:23,694 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:23,694 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-19 21:15:23,695 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:23,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 21:15:23,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:23,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 21:15:23,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:23,710 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:23,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:23,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:23,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region f6aadbd3b6a6d9947160d66ce2e3667b to RSGroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, REOPEN/MOVE 2023-07-19 21:15:23,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,722 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region f675e2914e4d3c58ccbbf79c7061146e to RSGroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:23,722 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, REOPEN/MOVE 2023-07-19 21:15:23,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, REOPEN/MOVE 2023-07-19 21:15:23,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:23,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1172547809, current retry=0 2023-07-19 21:15:23,726 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, REOPEN/MOVE 2023-07-19 21:15:23,726 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801323723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801323723"}]},"ts":"1689801323723"} 2023-07-19 21:15:23,727 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:23,727 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323727"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801323727"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801323727"}]},"ts":"1689801323727"} 2023-07-19 21:15:23,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:23,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:23,802 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 21:15:23,803 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-19 21:15:23,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-19 21:15:23,883 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:23,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f675e2914e4d3c58ccbbf79c7061146e, disabling compactions & flushes 2023-07-19 21:15:23,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:23,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:23,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. after waiting 0 ms 2023-07-19 21:15:23,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:23,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:23,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:23,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f675e2914e4d3c58ccbbf79c7061146e: 2023-07-19 21:15:23,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f675e2914e4d3c58ccbbf79c7061146e move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:23,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:23,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6aadbd3b6a6d9947160d66ce2e3667b, disabling compactions & flushes 2023-07-19 21:15:23,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:23,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. after waiting 0 ms 2023-07-19 21:15:23,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 
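The "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1172547809" request above starts one REOPEN/MOVE TransitRegionStateProcedure per region: each region is closed on its current server (45225) and reopened on a server belonging to the target group (33539), which is what the surrounding close/open entries show. A minimal client-side sketch of that request, again assuming the RSGroupAdminClient API from the earlier example:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  static void moveBothTables(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
    // Regions of both tables are unassigned from their current servers and
    // reassigned onto servers that are members of the target group.
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1172547809");
  }
}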
2023-07-19 21:15:23,927 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=CLOSED 2023-07-19 21:15:23,927 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323927"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801323927"}]},"ts":"1689801323927"} 2023-07-19 21:15:23,933 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-19 21:15:23,933 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,45225,1689801303640 in 198 msec 2023-07-19 21:15:23,934 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:23,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:23,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 
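Once the REOPEN/MOVE procedures finish, group membership and region placement can be checked from the client; the GetRSGroupInfoOfTable requests logged earlier are this kind of lookup. A hedged verification sketch, reusing the same assumed client objects as above:

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyMoveSketch {
  static void verify(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName tn = TableName.valueOf("GrouptestMultiTableMoveA");
    // Which group does the table belong to after the move?
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tn);
    System.out.println("group=" + info.getName() + " servers=" + info.getServers());
    // Where do its regions actually live now?
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}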
2023-07-19 21:15:23,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6aadbd3b6a6d9947160d66ce2e3667b: 2023-07-19 21:15:23,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f6aadbd3b6a6d9947160d66ce2e3667b move to jenkins-hbase4.apache.org,33539,1689801303815 record at close sequenceid=2 2023-07-19 21:15:23,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:23,957 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=CLOSED 2023-07-19 21:15:23,957 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801323957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801323957"}]},"ts":"1689801323957"} 2023-07-19 21:15:23,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-19 21:15:23,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,45225,1689801303640 in 231 msec 2023-07-19 21:15:23,967 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33539,1689801303815; forceNewPlan=false, retain=false 2023-07-19 21:15:24,085 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:24,085 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:24,085 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324085"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801324085"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801324085"}]},"ts":"1689801324085"} 2023-07-19 21:15:24,085 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324085"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801324085"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801324085"}]},"ts":"1689801324085"} 2023-07-19 21:15:24,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=104, state=RUNNABLE; OpenRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:24,091 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=103, 
state=RUNNABLE; OpenRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:24,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:24,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6aadbd3b6a6d9947160d66ce2e3667b, NAME => 'GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:24,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:24,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,249 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,252 DEBUG [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/f 2023-07-19 21:15:24,252 DEBUG [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/f 2023-07-19 21:15:24,253 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6aadbd3b6a6d9947160d66ce2e3667b columnFamilyName f 2023-07-19 21:15:24,255 INFO [StoreOpener-f6aadbd3b6a6d9947160d66ce2e3667b-1] regionserver.HStore(310): Store=f6aadbd3b6a6d9947160d66ce2e3667b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:24,256 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:24,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6aadbd3b6a6d9947160d66ce2e3667b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11148446720, jitterRate=0.03828001022338867}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:24,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6aadbd3b6a6d9947160d66ce2e3667b: 2023-07-19 21:15:24,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b., pid=108, masterSystemTime=1689801324239 2023-07-19 21:15:24,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:24,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:24,270 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:24,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 
2023-07-19 21:15:24,271 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324270"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801324270"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801324270"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801324270"}]},"ts":"1689801324270"} 2023-07-19 21:15:24,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f675e2914e4d3c58ccbbf79c7061146e, NAME => 'GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:24,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:24,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,274 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=103 2023-07-19 21:15:24,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=103, state=SUCCESS; OpenRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,33539,1689801303815 in 181 msec 2023-07-19 21:15:24,275 DEBUG [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/f 2023-07-19 21:15:24,275 DEBUG [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/f 2023-07-19 21:15:24,275 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, REOPEN/MOVE in 554 msec 2023-07-19 21:15:24,276 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f675e2914e4d3c58ccbbf79c7061146e columnFamilyName f 2023-07-19 21:15:24,276 INFO [StoreOpener-f675e2914e4d3c58ccbbf79c7061146e-1] regionserver.HStore(310): Store=f675e2914e4d3c58ccbbf79c7061146e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:24,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f675e2914e4d3c58ccbbf79c7061146e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11866712800, jitterRate=0.1051737517118454}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:24,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f675e2914e4d3c58ccbbf79c7061146e: 2023-07-19 21:15:24,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e., pid=107, masterSystemTime=1689801324239 2023-07-19 21:15:24,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:24,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 
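(Editor's aside, not part of the captured log.) The records above trace the REOPEN/MOVE procedures that run when tables are moved into an RSGroup: each region is closed on its old server, hbase:meta is updated to CLOSED, and an OpenRegionProcedure reopens it on a server in the target group. A minimal client-side sketch of the call that produces this sequence is below. It assumes the branch-2 hbase-rsgroup client API (an RSGroupAdminClient constructed from a Connection, with moveTables(Set<TableName>, String) and getRSGroupInfoOfTable(TableName)); the class name MoveTablesExample and the standalone main() wrapper are illustrative only.

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Assumed API: RSGroupAdminClient wraps the RSGroupAdminService coprocessor endpoint.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Group and table names taken from the log records above.
      String targetGroup = "Group_testMultiTableMove_1172547809";
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
      tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));

      // Issues the RSGroupAdminService.MoveTables request; on the master this drives the
      // TransitRegionStateProcedure (REOPEN/MOVE) close/open sequence logged above.
      rsGroupAdmin.moveTables(tables, targetGroup);

      // Mirrors the GetRSGroupInfoOfTable calls in the log: confirm the tables now
      // resolve to the target group.
      for (TableName t : tables) {
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(t);
        System.out.println(t + " -> " + info.getName());
      }
    }
  }
}
```

The move is synchronous from the caller's point of view: the master waits on the region-transition procedures (the "waitFor pid=103" record below) before reporting that all regions reached the target group.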
2023-07-19 21:15:24,293 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:24,293 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324293"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801324293"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801324293"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801324293"}]},"ts":"1689801324293"} 2023-07-19 21:15:24,297 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=104 2023-07-19 21:15:24,297 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=104, state=SUCCESS; OpenRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,33539,1689801303815 in 208 msec 2023-07-19 21:15:24,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, REOPEN/MOVE in 574 msec 2023-07-19 21:15:24,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-19 21:15:24,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1172547809. 2023-07-19 21:15:24,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:24,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:24,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:24,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-19 21:15:24,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:24,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-19 21:15:24,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:24,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:24,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:24,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1172547809 2023-07-19 21:15:24,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:24,741 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-19 21:15:24,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-19 21:15:24,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:24,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 21:15:24,751 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801324751"}]},"ts":"1689801324751"} 2023-07-19 21:15:24,753 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-19 21:15:24,755 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-19 21:15:24,756 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, UNASSIGN}] 2023-07-19 21:15:24,759 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, UNASSIGN 2023-07-19 21:15:24,761 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:24,761 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801324761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801324761"}]},"ts":"1689801324761"} 2023-07-19 21:15:24,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, 
server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:24,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 21:15:24,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f675e2914e4d3c58ccbbf79c7061146e, disabling compactions & flushes 2023-07-19 21:15:24,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:24,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:24,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. after waiting 0 ms 2023-07-19 21:15:24,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 2023-07-19 21:15:24,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:24,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e. 
2023-07-19 21:15:24,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f675e2914e4d3c58ccbbf79c7061146e: 2023-07-19 21:15:24,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:24,962 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=f675e2914e4d3c58ccbbf79c7061146e, regionState=CLOSED 2023-07-19 21:15:24,962 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801324962"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801324962"}]},"ts":"1689801324962"} 2023-07-19 21:15:24,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-19 21:15:24,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure f675e2914e4d3c58ccbbf79c7061146e, server=jenkins-hbase4.apache.org,33539,1689801303815 in 199 msec 2023-07-19 21:15:24,971 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-19 21:15:24,971 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f675e2914e4d3c58ccbbf79c7061146e, UNASSIGN in 213 msec 2023-07-19 21:15:24,974 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801324974"}]},"ts":"1689801324974"} 2023-07-19 21:15:24,979 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-19 21:15:24,981 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-19 21:15:24,983 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 240 msec 2023-07-19 21:15:25,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-19 21:15:25,053 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-19 21:15:25,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-19 21:15:25,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,058 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1172547809' 2023-07-19 21:15:25,059 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,065 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:25,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:25,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,067 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits] 2023-07-19 21:15:25,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-19 21:15:25,075 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e/recovered.edits/7.seqid 2023-07-19 21:15:25,076 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveA/f675e2914e4d3c58ccbbf79c7061146e 2023-07-19 21:15:25,076 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-19 21:15:25,079 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,081 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-19 21:15:25,083 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-19 21:15:25,085 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,085 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
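(Editor's aside, not part of the captured log.) The disable-then-delete records around here (DisableTableProcedure pid=109, DeleteTableProcedure pid=112) correspond to the standard HBase 2.x Admin calls sketched below: regions are unassigned, region directories are archived by HFileArchiver, and the table's rows and descriptor are removed from hbase:meta. The class name DropTableExample and the standalone main() are illustrative; only the Admin methods shown are the real client API.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableExample {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        // DisableTableProcedure: UNASSIGN each region, then mark the table DISABLED in hbase:meta.
        admin.disableTable(table);
      }
      // DeleteTableProcedure: archive the region directories, remove the region and table
      // state rows from hbase:meta, and drop the table descriptor.
      admin.deleteTable(table);
    }
  }
}
```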
2023-07-19 21:15:25,086 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801325085"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:25,087 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 21:15:25,087 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f675e2914e4d3c58ccbbf79c7061146e, NAME => 'GrouptestMultiTableMoveA,,1689801322455.f675e2914e4d3c58ccbbf79c7061146e.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 21:15:25,087 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-19 21:15:25,088 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801325088"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:25,093 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-19 21:15:25,096 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-19 21:15:25,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 42 msec 2023-07-19 21:15:25,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-19 21:15:25,173 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-19 21:15:25,174 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-19 21:15:25,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-19 21:15:25,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 21:15:25,179 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801325179"}]},"ts":"1689801325179"} 2023-07-19 21:15:25,181 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-19 21:15:25,189 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-19 21:15:25,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, UNASSIGN}] 2023-07-19 21:15:25,193 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, UNASSIGN 2023-07-19 21:15:25,194 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:25,194 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801325194"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801325194"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801325194"}]},"ts":"1689801325194"} 2023-07-19 21:15:25,196 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,33539,1689801303815}] 2023-07-19 21:15:25,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 21:15:25,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:25,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6aadbd3b6a6d9947160d66ce2e3667b, disabling compactions & flushes 2023-07-19 21:15:25,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:25,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:25,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. after waiting 0 ms 2023-07-19 21:15:25,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 2023-07-19 21:15:25,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:25,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b. 
2023-07-19 21:15:25,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6aadbd3b6a6d9947160d66ce2e3667b: 2023-07-19 21:15:25,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:25,356 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=f6aadbd3b6a6d9947160d66ce2e3667b, regionState=CLOSED 2023-07-19 21:15:25,356 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689801325356"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801325356"}]},"ts":"1689801325356"} 2023-07-19 21:15:25,359 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-19 21:15:25,359 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure f6aadbd3b6a6d9947160d66ce2e3667b, server=jenkins-hbase4.apache.org,33539,1689801303815 in 161 msec 2023-07-19 21:15:25,360 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-19 21:15:25,360 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f6aadbd3b6a6d9947160d66ce2e3667b, UNASSIGN in 168 msec 2023-07-19 21:15:25,361 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801325361"}]},"ts":"1689801325361"} 2023-07-19 21:15:25,362 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-19 21:15:25,363 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-19 21:15:25,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 190 msec 2023-07-19 21:15:25,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-19 21:15:25,481 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-19 21:15:25,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-19 21:15:25,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,485 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1172547809' 2023-07-19 21:15:25,486 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:25,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,490 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:25,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,492 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits] 2023-07-19 21:15:25,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-19 21:15:25,498 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits/7.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b/recovered.edits/7.seqid 2023-07-19 21:15:25,498 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/GrouptestMultiTableMoveB/f6aadbd3b6a6d9947160d66ce2e3667b 2023-07-19 21:15:25,498 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-19 21:15:25,501 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,503 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-19 21:15:25,505 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-19 21:15:25,506 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,506 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-19 21:15:25,506 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801325506"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:25,507 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 21:15:25,507 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f6aadbd3b6a6d9947160d66ce2e3667b, NAME => 'GrouptestMultiTableMoveB,,1689801323079.f6aadbd3b6a6d9947160d66ce2e3667b.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 21:15:25,507 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-19 21:15:25,508 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801325508"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:25,509 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-19 21:15:25,512 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-19 21:15:25,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 30 msec 2023-07-19 21:15:25,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-19 21:15:25,595 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-19 21:15:25,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:25,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539] to rsgroup default 2023-07-19 21:15:25,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1172547809 2023-07-19 21:15:25,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1172547809, current retry=0 2023-07-19 21:15:25,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815] are moved back to Group_testMultiTableMove_1172547809 2023-07-19 21:15:25,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1172547809 => default 2023-07-19 21:15:25,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1172547809 2023-07-19 21:15:25,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:25,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
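(Editor's aside, not part of the captured log.) The teardown records above move the test's region server back to the default group and then remove the test group. A minimal sketch of those two calls follows, again assuming the branch-2 RSGroupAdminClient API (moveServers(Set<Address>, String) as seen in the stack trace below, and removeRSGroup(String)); the class name RSGroupTeardownExample is illustrative. Note that the subsequent attempt in the log to move the master's own address (port 36267) into the "master" group is rejected with a ConstraintException, because only registered region servers can be moved between groups.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testMultiTableMove_1172547809";

      // Move the region server back into the default group; the test's tables were
      // already dropped, so no regions need to travel with it ("Moving 0 region(s)").
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:33539")),
          "default");

      // A group can only be removed once it holds no servers and no tables.
      rsGroupAdmin.removeRSGroup(group);
    }
  }
}
```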
2023-07-19 21:15:25,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:25,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:25,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:25,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,627 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:25,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:25,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:25,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:25,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 513 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802525648, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:25,649 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:25,651 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:25,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,652 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:25,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,675 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=522 (was 525), OpenFileDescriptor=822 (was 824), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=176 (was 176), AvailableMemoryMB=2480 (was 2772) 2023-07-19 21:15:25,675 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-19 21:15:25,693 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=522, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=176, AvailableMemoryMB=2480 2023-07-19 21:15:25,693 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-19 21:15:25,694 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-19 21:15:25,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 21:15:25,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:25,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:25,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,707 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:25,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:25,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:25,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:25,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 541 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802525722, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:25,723 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:25,724 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:25,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,725 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:25,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-19 21:15:25,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup oldGroup 2023-07-19 21:15:25,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to default 2023-07-19 21:15:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-19 21:15:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 21:15:25,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-19 21:15:25,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,753 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-19 21:15:25,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 21:15:25,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:25,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43325] to rsgroup anotherRSGroup 2023-07-19 21:15:25,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 21:15:25,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:25,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:25,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43325,1689801307487] are moved back to default 2023-07-19 21:15:25,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-19 21:15:25,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,772 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 21:15:25,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-19 21:15:25,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-19 21:15:25,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:33664 deadline: 1689802525781, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-19 21:15:25,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-19 21:15:25,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:33664 deadline: 1689802525784, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-19 21:15:25,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-19 21:15:25,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:33664 deadline: 1689802525785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-19 21:15:25,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-19 21:15:25,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 581 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:33664 deadline: 1689802525786, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-19 21:15:25,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
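[Editor's note, not part of the log] A minimal sketch of the client-side calls that produce the RenameRSGroup requests and ConstraintExceptions recorded in the entries above. This assumes the branch-2.4 hbase-rsgroup client API (RSGroupAdminClient, as named in the traces) exposes renameRSGroup(oldName, newName) matching the RenameRSGroup RPC, and that `conn` is an open Connection; group names mirror the ones used by testRenameRSGroupConstraints.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupSketch {
  static void exerciseRenameConstraints(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);

    // Renaming a group that was never created is rejected:
    // "RSGroup nonExistingRSGroup does not exist".
    try {
      admin.renameRSGroup("nonExistingRSGroup", "newRSGroup1");
    } catch (IOException expected) {
      // The server-side ConstraintException arrives wrapped as an IOException.
    }

    // Renaming onto a name that is already taken is rejected:
    // "Group already exists: anotherRSGroup".
    try {
      admin.renameRSGroup("oldGroup", "anotherRSGroup");
    } catch (IOException expected) {
    }

    // The built-in default group can never be renamed:
    // "Can't rename default rsgroup".
    try {
      admin.renameRSGroup("default", "newRSGroup2");
    } catch (IOException expected) {
    }
  }
}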
2023-07-19 21:15:25,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43325] to rsgroup default 2023-07-19 21:15:25,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-19 21:15:25,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:25,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-19 21:15:25,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43325,1689801307487] are moved back to anotherRSGroup 2023-07-19 21:15:25,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-19 21:15:25,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-19 21:15:25,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 21:15:25,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-19 21:15:25,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup default 2023-07-19 21:15:25,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-19 21:15:25,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-19 21:15:25,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to oldGroup 2023-07-19 21:15:25,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-19 21:15:25,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-19 21:15:25,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:25,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
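[Editor's note, not part of the log] A minimal sketch of the teardown pattern behind the MoveServers / RemoveRSGroup entries above, where TestRSGroupsBase returns region servers to the default group after each test before dropping the temporary groups. It assumes the branch-2.4 hbase-rsgroup client API and an open Connection `conn`; the host, port, and group names are illustrative only.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  static void restoreDefaultGroup(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);

    // First move the servers parked in the test group back to "default"
    // (RSGroupInfo.DEFAULT_GROUP), which is what the MoveServers entries show.
    admin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33539)),
        RSGroupInfo.DEFAULT_GROUP);

    // Only then drop the now-empty test group, mirroring the RemoveRSGroup
    // order seen in the teardown entries above.
    admin.removeRSGroup("oldGroup");
  }
}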
2023-07-19 21:15:25,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:25,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:25,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:25,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,832 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:25,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:25,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:25,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:25,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 617 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802525843, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:25,844 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:25,845 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:25,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,846 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:25,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,867 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=526 (was 522) Potentially hanging thread: hconnection-0x63197ba-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=366 (was 366), ProcessCount=176 (was 176), AvailableMemoryMB=2478 (was 2480) 2023-07-19 21:15:25,867 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-19 21:15:25,886 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=526, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=366, ProcessCount=176, AvailableMemoryMB=2477 2023-07-19 21:15:25,886 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-19 21:15:25,886 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-19 21:15:25,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:25,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
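The block above is the per-method reset that TestRSGroupsBase runs between tests: move no tables and no servers back to the default group (the server logs both empty moves as no-ops), drop and re-add the helper group named master, then try to move the active master's own address (jenkins-hbase4.apache.org:36267) into it. That last call is expected to fail with ConstraintException, since only live region servers can belong to a group, and the test only logs it ("Got this on setup, FYI"). A minimal sketch of the same sequence against the RSGroupAdminClient from the hbase-rsgroup module; the class and method names appear in the stack traces above, but the exact signatures and the hard-coded host/port are assumptions copied from this log:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupReset {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // Empty moves are no-ops on the server side ("passed an empty set. Ignoring.").
      groups.moveTables(Collections.emptySet(), "default");
      groups.moveServers(Collections.emptySet(), "default");
      // Drop and re-create the helper group named "master" used by the test.
      groups.removeRSGroup("master");
      groups.addRSGroup("master");
      // Moving the master's own address fails with ConstraintException, because only
      // live region servers can be group members; the test tolerates this failure.
      groups.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36267)),
          "master");
    }
  }
}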
2023-07-19 21:15:25,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:25,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:25,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:25,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:25,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:25,904 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:25,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:25,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:25,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:25,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:25,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 645 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802525921, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:25,922 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:25,925 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,927 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:25,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:25,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-19 21:15:25,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:25,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:25,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup oldgroup 2023-07-19 21:15:25,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:25,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to default 2023-07-19 21:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-19 21:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:25,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:25,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:25,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 21:15:25,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:25,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:25,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-19 21:15:25,966 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:25,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-19 21:15:25,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 21:15:25,968 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:25,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:25,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:25,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:25,972 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:25,973 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:25,974 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/testRename/9006f5c913607b5238e3f3f5730241fd empty. 
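The entries above record the master side of creating the test table: pid=117 is a CreateTableProcedure for 'testRename' with a single column family 'tr', and every attribute listed in the create request (BLOOMFILTER, VERSIONS, BLOCKSIZE, and so on) is a default. A minimal client-side sketch that would issue the same request through the standard HBase 2.x Admin API, assuming a cluster configuration is available on the classpath:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class CreateTestRename {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      // Blocks until the master-side procedure (pid=117 in this log) completes;
      // the repeated "Checking to see if procedure is done pid=117" lines are that polling.
      admin.createTable(desc);
    }
  }
}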
2023-07-19 21:15:25,975 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:25,975 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-19 21:15:25,995 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:25,999 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9006f5c913607b5238e3f3f5730241fd, NAME => 'testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:26,019 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:26,019 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 9006f5c913607b5238e3f3f5730241fd, disabling compactions & flushes 2023-07-19 21:15:26,019 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,019 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,019 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. after waiting 0 ms 2023-07-19 21:15:26,020 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,020 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,020 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:26,022 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:26,023 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326023"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801326023"}]},"ts":"1689801326023"} 2023-07-19 21:15:26,024 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
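The CREATE_TABLE_ADD_TO_META step writes the new region's regioninfo and state columns into hbase:meta, and clients later locate the region by reading exactly those rows. A hedged sketch of reading them back with an ordinary scan of hbase:meta, keyed by the table-name prefix; the row-key layout "<table>,<startkey>,<timestamp>.<encoded>." is the convention visible in the Put above:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaForTestRename {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows for a table start with "<tablename>," in hbase:meta.
      Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("testRename,"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result row : scanner) {
          System.out.println(Bytes.toString(row.getRow()));
        }
      }
    }
  }
}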
2023-07-19 21:15:26,025 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:26,025 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801326025"}]},"ts":"1689801326025"} 2023-07-19 21:15:26,026 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-19 21:15:26,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:26,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:26,030 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:26,030 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:26,030 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, ASSIGN}] 2023-07-19 21:15:26,032 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, ASSIGN 2023-07-19 21:15:26,032 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:26,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 21:15:26,183 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:26,184 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:26,184 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801326184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801326184"}]},"ts":"1689801326184"} 2023-07-19 21:15:26,186 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:26,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 21:15:26,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9006f5c913607b5238e3f3f5730241fd, NAME => 'testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:26,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:26,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,344 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,345 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:26,345 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:26,345 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9006f5c913607b5238e3f3f5730241fd columnFamilyName tr 2023-07-19 21:15:26,346 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(310): Store=9006f5c913607b5238e3f3f5730241fd/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:26,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:26,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9006f5c913607b5238e3f3f5730241fd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11408107520, jitterRate=0.062462806701660156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:26,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:26,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd., pid=119, masterSystemTime=1689801326338 2023-07-19 21:15:26,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
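At this point region 9006f5c913607b5238e3f3f5730241fd is open on jenkins-hbase4.apache.org,43325 with next sequenceid=2. A throwaway smoke test against the new table using only the stock client API; the row, qualifier, and value below are made up for illustration:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class SmokeTestRename {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("testRename"))) {
      byte[] fam = Bytes.toBytes("tr");
      // Write one cell into the 'tr' family, then read it back.
      table.put(new Put(Bytes.toBytes("row1"))
          .addColumn(fam, Bytes.toBytes("q"), Bytes.toBytes("v")));
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(Bytes.toString(r.getValue(fam, Bytes.toBytes("q")))); // prints "v"
    }
  }
}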
2023-07-19 21:15:26,355 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:26,355 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326355"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801326355"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801326355"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801326355"}]},"ts":"1689801326355"} 2023-07-19 21:15:26,357 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-19 21:15:26,357 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487 in 170 msec 2023-07-19 21:15:26,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-19 21:15:26,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, ASSIGN in 327 msec 2023-07-19 21:15:26,359 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:26,360 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801326359"}]},"ts":"1689801326359"} 2023-07-19 21:15:26,361 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-19 21:15:26,365 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:26,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 402 msec 2023-07-19 21:15:26,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-19 21:15:26,570 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-19 21:15:26,570 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-19 21:15:26,571 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:26,574 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-19 21:15:26,574 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:26,574 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
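With the table fully assigned, the test next asks the rsgroup coprocessor to move testRename into the group oldgroup, the group that received servers 33539 and 33985 earlier. A hedged sketch of that call through the same RSGroupAdminClient the test uses; reassigning a table to another group triggers the REOPEN/MOVE of each of its regions onto servers of the target group, which is what the following entries show:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Move the whole table; the master then closes and reopens its regions
      // on servers belonging to 'oldgroup'.
      new RSGroupAdminClient(conn)
          .moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    }
  }
}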
2023-07-19 21:15:26,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-19 21:15:26,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:26,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:26,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:26,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:26,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-19 21:15:26,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 9006f5c913607b5238e3f3f5730241fd to RSGroup oldgroup 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:26,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE 2023-07-19 21:15:26,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-19 21:15:26,582 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE 2023-07-19 21:15:26,582 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:26,582 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326582"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801326582"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801326582"}]},"ts":"1689801326582"} 2023-07-19 21:15:26,584 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:26,702 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 21:15:26,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9006f5c913607b5238e3f3f5730241fd, disabling compactions & flushes 2023-07-19 21:15:26,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. after waiting 0 ms 2023-07-19 21:15:26,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:26,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:26,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
2023-07-19 21:15:26,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:26,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9006f5c913607b5238e3f3f5730241fd move to jenkins-hbase4.apache.org,33985,1689801303414 record at close sequenceid=2 2023-07-19 21:15:26,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:26,749 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=CLOSED 2023-07-19 21:15:26,749 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326749"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801326749"}]},"ts":"1689801326749"} 2023-07-19 21:15:26,752 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-19 21:15:26,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487 in 167 msec 2023-07-19 21:15:26,753 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33985,1689801303414; forceNewPlan=false, retain=false 2023-07-19 21:15:26,903 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 21:15:26,904 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:26,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801326904"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801326904"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801326904"}]},"ts":"1689801326904"} 2023-07-19 21:15:26,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:27,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
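The close on jenkins-hbase4.apache.org,43325 above and the open records that follow move region 9006f5c913607b5238e3f3f5730241fd to jenkins-hbase4.apache.org,33985. A minimal sketch, using standard branch-2 client API (not necessarily the calls TestRSGroupsAdmin1 itself makes), of how a test could confirm the new location once the REOPEN/MOVE finishes:

```java
// Sketch only: look up where a region landed after a REOPEN/MOVE.
// The table name is taken from the log; everything else is illustrative.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationCheck {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
      // reload=true bypasses the client's meta cache so the post-move location is returned.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println("testRename region is now on " + loc.getServerName());
    }
  }
}
```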
2023-07-19 21:15:27,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9006f5c913607b5238e3f3f5730241fd, NAME => 'testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:27,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:27,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,065 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,066 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:27,066 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:27,066 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9006f5c913607b5238e3f3f5730241fd columnFamilyName tr 2023-07-19 21:15:27,067 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(310): Store=9006f5c913607b5238e3f3f5730241fd/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:27,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:27,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9006f5c913607b5238e3f3f5730241fd; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9452970560, jitterRate=-0.11962351202964783}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:27,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:27,074 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd., pid=122, masterSystemTime=1689801327058 2023-07-19 21:15:27,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:27,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:27,076 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:27,076 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801327076"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801327076"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801327076"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801327076"}]},"ts":"1689801327076"} 2023-07-19 21:15:27,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-19 21:15:27,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,33985,1689801303414 in 172 msec 2023-07-19 21:15:27,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE in 499 msec 2023-07-19 21:15:27,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-19 21:15:27,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
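The MoveTables request and the REOPEN/MOVE procedure recorded above correspond to a single rsgroup admin call on the client side. A minimal sketch, assuming the branch-2 hbase-rsgroup RSGroupAdminClient; the names testRename and oldgroup are taken from the log:

```java
// Sketch of the client calls behind the MoveTables / GetRSGroupInfoOfTable records above.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("testRename");
      // Issues RSGroupAdminService.MoveTables; the master then runs one
      // TransitRegionStateProcedure (REOPEN/MOVE) per region, as logged above.
      rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
      // Issues RSGroupAdminService.GetRSGroupInfoOfTable to confirm membership.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("testRename now in group: " + info.getName());
    }
  }
}
```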
2023-07-19 21:15:27,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:27,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:27,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:27,588 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:27,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 21:15:27,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:27,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-19 21:15:27,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:27,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 21:15:27,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:27,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:27,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:27,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-19 21:15:27,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:27,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:27,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:27,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 
21:15:27,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:27,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:27,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:27,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:27,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43325] to rsgroup normal 2023-07-19 21:15:27,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:27,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:27,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:27,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:27,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:27,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:27,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43325,1689801307487] are moved back to default 2023-07-19 21:15:27,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-19 21:15:27,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:27,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:27,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:27,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 21:15:27,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
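The AddRSGroup and MoveServers requests above (group normal, server jenkins-hbase4.apache.org:43325) map to two client calls. A minimal sketch under the same branch-2 RSGroupAdminClient assumption:

```java
// Sketch of the AddRSGroup / MoveServers calls logged above; names come from the log.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // AddRSGroup: creates the empty group and bumps the ZK GroupInfo count.
      rsGroupAdmin.addRSGroup("normal");
      // MoveServers: regions hosted by the server are first drained back to the source
      // group ("Moving 0 region(s) to group default" above, since it held none), then
      // the server itself is reassigned to the target group.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:43325")),
          "normal");
    }
  }
}
```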
2023-07-19 21:15:27,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:27,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-19 21:15:27,625 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:27,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-19 21:15:27,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 21:15:27,627 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:27,627 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:27,628 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:27,628 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:27,629 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:27,637 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:27,638 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:27,639 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 empty. 
2023-07-19 21:15:27,639 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:27,639 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-19 21:15:27,664 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:27,665 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9326a7c092cdb69a8ea6c6746e9c2bb5, NAME => 'unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 9326a7c092cdb69a8ea6c6746e9c2bb5, disabling compactions & flushes 2023-07-19 21:15:27,695 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. after waiting 0 ms 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:27,695 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:27,695 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:27,698 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:27,699 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801327699"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801327699"}]},"ts":"1689801327699"} 2023-07-19 21:15:27,700 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-19 21:15:27,701 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:27,701 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801327701"}]},"ts":"1689801327701"} 2023-07-19 21:15:27,703 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-19 21:15:27,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, ASSIGN}] 2023-07-19 21:15:27,720 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, ASSIGN 2023-07-19 21:15:27,721 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:27,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 21:15:27,873 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:27,873 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801327873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801327873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801327873"}]},"ts":"1689801327873"} 2023-07-19 21:15:27,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:27,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 21:15:28,030 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 
2023-07-19 21:15:28,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9326a7c092cdb69a8ea6c6746e9c2bb5, NAME => 'unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:28,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:28,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,032 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,033 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:28,034 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:28,034 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9326a7c092cdb69a8ea6c6746e9c2bb5 columnFamilyName ut 2023-07-19 21:15:28,035 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(310): Store=9326a7c092cdb69a8ea6c6746e9c2bb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:28,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:28,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9326a7c092cdb69a8ea6c6746e9c2bb5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11522317440, jitterRate=0.07309943437576294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:28,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5., pid=125, masterSystemTime=1689801328026 2023-07-19 21:15:28,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 
2023-07-19 21:15:28,045 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:28,045 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801328045"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801328045"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801328045"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801328045"}]},"ts":"1689801328045"} 2023-07-19 21:15:28,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-19 21:15:28,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640 in 171 msec 2023-07-19 21:15:28,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-19 21:15:28,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, ASSIGN in 331 msec 2023-07-19 21:15:28,050 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:28,051 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801328050"}]},"ts":"1689801328050"} 2023-07-19 21:15:28,052 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-19 21:15:28,054 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:28,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 432 msec 2023-07-19 21:15:28,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-19 21:15:28,230 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-19 21:15:28,230 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-19 21:15:28,230 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:28,234 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-19 21:15:28,234 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:28,235 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
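The CreateTableProcedure above (pid=123) and the assignment wait that follows it correspond roughly to the test-side sequence sketched below. The descriptor values mirror the create statement logged at 21:15:27,621; TEST_UTIL stands for the HBaseTestingUtility instance assumed to drive the mini-cluster:

```java
// Sketch only: create 'unmovedTable' with family 'ut' and wait for assignment.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateUnmovedTableSketch {
  static void createAndWait(HBaseTestingUtility TEST_UTIL) throws Exception {
    TableName table = TableName.valueOf("unmovedTable");
    try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("ut"))
              .setMaxVersions(1)                                    // VERSIONS => '1'
              .build())
          .build());
    }
    // Blocks until hbase:meta and the AssignmentManager agree the regions are assigned,
    // matching the "Waiting until all regions of table unmovedTable get assigned" records.
    TEST_UTIL.waitUntilAllRegionsAssigned(table);
  }
}
```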
2023-07-19 21:15:28,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-19 21:15:28,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-19 21:15:28,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:28,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:28,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:28,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:28,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-19 21:15:28,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 9326a7c092cdb69a8ea6c6746e9c2bb5 to RSGroup normal 2023-07-19 21:15:28,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE 2023-07-19 21:15:28,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-19 21:15:28,248 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE 2023-07-19 21:15:28,249 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:28,249 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801328249"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801328249"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801328249"}]},"ts":"1689801328249"} 2023-07-19 21:15:28,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:28,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9326a7c092cdb69a8ea6c6746e9c2bb5, disabling compactions & flushes 2023-07-19 21:15:28,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 
2023-07-19 21:15:28,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. after waiting 0 ms 2023-07-19 21:15:28,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:28,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:28,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9326a7c092cdb69a8ea6c6746e9c2bb5 move to jenkins-hbase4.apache.org,43325,1689801307487 record at close sequenceid=2 2023-07-19 21:15:28,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,412 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=CLOSED 2023-07-19 21:15:28,412 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801328412"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801328412"}]},"ts":"1689801328412"} 2023-07-19 21:15:28,414 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-19 21:15:28,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640 in 163 msec 2023-07-19 21:15:28,415 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:28,565 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:28,566 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801328565"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801328565"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801328565"}]},"ts":"1689801328565"} 2023-07-19 21:15:28,568 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:28,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9326a7c092cdb69a8ea6c6746e9c2bb5, NAME => 'unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:28,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:28,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,725 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,726 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:28,726 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:28,726 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
9326a7c092cdb69a8ea6c6746e9c2bb5 columnFamilyName ut 2023-07-19 21:15:28,727 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(310): Store=9326a7c092cdb69a8ea6c6746e9c2bb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:28,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:28,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9326a7c092cdb69a8ea6c6746e9c2bb5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11269336160, jitterRate=0.04953871667385101}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:28,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:28,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5., pid=128, masterSystemTime=1689801328719 2023-07-19 21:15:28,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:28,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 
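The records that follow (21:15:29,258 onward) finish the unmovedTable move to group normal and then rename the group oldgroup to newgroup via RSGroupAdminService.RenameRSGroup. A sketch of the client-side call, assuming the branch-2.4 hbase-rsgroup client exposes a renameRSGroup method backing that RPC; the exact entry point used by this test is an assumption:

```java
// Sketch only: rename an rsgroup; group names come from the log.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Issues RSGroupAdminService.RenameRSGroup; tables that were in oldgroup
      // (testRename, per the GetRSGroupInfoOfTable records that follow) stay with
      // the renamed group, now called newgroup.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    }
  }
}
```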
2023-07-19 21:15:28,736 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:28,736 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801328736"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801328736"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801328736"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801328736"}]},"ts":"1689801328736"} 2023-07-19 21:15:28,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-19 21:15:28,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,43325,1689801307487 in 169 msec 2023-07-19 21:15:28,740 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE in 496 msec 2023-07-19 21:15:29,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-19 21:15:29,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-19 21:15:29,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:29,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:29,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:29,255 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:29,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 21:15:29,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:29,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-19 21:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 21:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:29,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-19 21:15:29,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:29,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:29,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:29,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:29,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-19 21:15:29,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-19 21:15:29,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:29,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:29,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-19 21:15:29,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:29,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-19 21:15:29,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:29,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-19 21:15:29,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:29,276 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:29,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:29,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-19 21:15:29,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:29,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:29,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:29,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:29,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:29,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-19 21:15:29,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 9326a7c092cdb69a8ea6c6746e9c2bb5 to RSGroup default 2023-07-19 21:15:29,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE 2023-07-19 21:15:29,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 21:15:29,288 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE 2023-07-19 21:15:29,288 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:29,289 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801329288"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801329288"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801329288"}]},"ts":"1689801329288"} 2023-07-19 21:15:29,290 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:29,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9326a7c092cdb69a8ea6c6746e9c2bb5, disabling compactions & flushes 2023-07-19 21:15:29,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. after waiting 0 ms 2023-07-19 21:15:29,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:29,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:29,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9326a7c092cdb69a8ea6c6746e9c2bb5 move to jenkins-hbase4.apache.org,45225,1689801303640 record at close sequenceid=5 2023-07-19 21:15:29,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,454 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=CLOSED 2023-07-19 21:15:29,454 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801329454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801329454"}]},"ts":"1689801329454"} 2023-07-19 21:15:29,457 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-19 21:15:29,457 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,43325,1689801307487 in 165 msec 2023-07-19 21:15:29,457 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:29,608 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:29,608 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801329608"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801329608"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801329608"}]},"ts":"1689801329608"} 2023-07-19 21:15:29,610 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:29,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9326a7c092cdb69a8ea6c6746e9c2bb5, NAME => 'unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:29,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:29,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,769 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,770 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:29,770 DEBUG [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/ut 2023-07-19 21:15:29,770 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9326a7c092cdb69a8ea6c6746e9c2bb5 columnFamilyName ut 2023-07-19 21:15:29,771 INFO [StoreOpener-9326a7c092cdb69a8ea6c6746e9c2bb5-1] regionserver.HStore(310): Store=9326a7c092cdb69a8ea6c6746e9c2bb5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:29,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:29,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9326a7c092cdb69a8ea6c6746e9c2bb5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10726012640, jitterRate=-0.0010622292757034302}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:29,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:29,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5., pid=131, masterSystemTime=1689801329762 2023-07-19 21:15:29,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:29,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 
2023-07-19 21:15:29,780 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9326a7c092cdb69a8ea6c6746e9c2bb5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:29,780 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689801329780"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801329780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801329780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801329780"}]},"ts":"1689801329780"} 2023-07-19 21:15:29,782 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-19 21:15:29,782 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 9326a7c092cdb69a8ea6c6746e9c2bb5, server=jenkins-hbase4.apache.org,45225,1689801303640 in 171 msec 2023-07-19 21:15:29,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9326a7c092cdb69a8ea6c6746e9c2bb5, REOPEN/MOVE in 495 msec 2023-07-19 21:15:29,800 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-19 21:15:29,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-19 21:15:30,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-19 21:15:30,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
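The RenameRSGroup, GetRSGroupInfo, GetRSGroupInfoOfTable and MoveTables requests logged above are issued by the test driver through the rsgroup coprocessor endpoint. The following is a minimal client-side sketch of that sequence, not the test's actual code: it assumes the RSGroupAdminClient(Connection) constructor and a renameRSGroup(String, String) client method backing the RenameRSGroup RPC, and it reuses the group and table names (oldgroup, newgroup, testRename, unmovedTable) from the log; the class name RenameRSGroupSketch is illustrative only.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RenameRSGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // RenameRSGroup: oldgroup -> newgroup; the master rewrites the
          // /hbase/rsgroup znodes, as the "Updating znode" DEBUG lines show.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // GetRSGroupInfo / GetRSGroupInfoOfTable: confirm the renamed group
          // still owns the tables it owned before the rename.
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          if (!renamed.getName().equals(ofTable.getName())) {
            throw new IllegalStateException("testRename is not in the renamed group");
          }
          // MoveTables: return unmovedTable to the default group, which triggers
          // the REOPEN/MOVE TransitRegionStateProcedure (pid=129) recorded above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")),
              RSGroupInfo.DEFAULT_GROUP);
        }
      }
    }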
2023-07-19 21:15:30,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:30,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43325] to rsgroup default 2023-07-19 21:15:30,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-19 21:15:30,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:30,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:30,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:30,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-19 21:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43325,1689801307487] are moved back to normal 2023-07-19 21:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-19 21:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:30,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-19 21:15:30,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:30,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:30,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:30,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 21:15:30,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:30,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:30,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
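The MoveServers / RemoveRSGroup pair above is the usual teardown for a temporary group: its servers go back to the default group and the now-empty group is dropped. A hedged sketch of those two calls is below, again via RSGroupAdminClient; the host and port jenkins-hbase4.apache.org:43325 and the group name "normal" come from the log, while the class and method names are illustrative.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class TearDownGroupSketch {
      // Returns the group's lone server to "default", then drops the emptied group,
      // mirroring the MoveServers / RemoveRSGroup requests for group "normal" above.
      static void tearDownNormalGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43325)),
            RSGroupInfo.DEFAULT_GROUP);
        // removeRSGroup succeeds only once the group holds no servers and no tables.
        rsGroupAdmin.removeRSGroup("normal");
      }
    }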
2023-07-19 21:15:30,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:30,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:30,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:30,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:30,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:30,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:30,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:30,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:30,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-19 21:15:30,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:30,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:30,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:30,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-19 21:15:30,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(345): Moving region 9006f5c913607b5238e3f3f5730241fd to RSGroup default 2023-07-19 21:15:30,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE 2023-07-19 21:15:30,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-19 21:15:30,333 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE 2023-07-19 21:15:30,334 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:30,334 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801330334"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801330334"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801330334"}]},"ts":"1689801330334"} 2023-07-19 21:15:30,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,33985,1689801303414}] 2023-07-19 21:15:30,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9006f5c913607b5238e3f3f5730241fd, disabling compactions & flushes 2023-07-19 21:15:30,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:30,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:30,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. after waiting 0 ms 2023-07-19 21:15:30,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:30,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-19 21:15:30,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
2023-07-19 21:15:30,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:30,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9006f5c913607b5238e3f3f5730241fd move to jenkins-hbase4.apache.org,43325,1689801307487 record at close sequenceid=5 2023-07-19 21:15:30,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,499 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=CLOSED 2023-07-19 21:15:30,499 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801330499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801330499"}]},"ts":"1689801330499"} 2023-07-19 21:15:30,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-19 21:15:30,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,33985,1689801303414 in 165 msec 2023-07-19 21:15:30,503 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:30,653 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 21:15:30,653 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:30,654 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801330653"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801330653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801330653"}]},"ts":"1689801330653"} 2023-07-19 21:15:30,655 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:30,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
2023-07-19 21:15:30,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9006f5c913607b5238e3f3f5730241fd, NAME => 'testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:30,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:30,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,812 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,813 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:30,814 DEBUG [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/tr 2023-07-19 21:15:30,814 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9006f5c913607b5238e3f3f5730241fd columnFamilyName tr 2023-07-19 21:15:30,815 INFO [StoreOpener-9006f5c913607b5238e3f3f5730241fd-1] regionserver.HStore(310): Store=9006f5c913607b5238e3f3f5730241fd/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:30,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:30,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9006f5c913607b5238e3f3f5730241fd; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10024935840, jitterRate=-0.06635509431362152}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:30,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:30,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd., pid=134, masterSystemTime=1689801330807 2023-07-19 21:15:30,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:30,825 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:30,827 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9006f5c913607b5238e3f3f5730241fd, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:30,827 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689801330827"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801330827"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801330827"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801330827"}]},"ts":"1689801330827"} 2023-07-19 21:15:30,831 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-19 21:15:30,831 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 9006f5c913607b5238e3f3f5730241fd, server=jenkins-hbase4.apache.org,43325,1689801307487 in 174 msec 2023-07-19 21:15:30,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9006f5c913607b5238e3f3f5730241fd, REOPEN/MOVE in 499 msec 2023-07-19 21:15:31,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-19 21:15:31,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
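Once ProcedureSyncWait reports pid=132 done and the log asserts that all regions of testRename moved to the default group, the new region location can be cross-checked against the target group's server list. The sketch below shows one way to do that verification with the public client API, assuming a live Connection to the same cluster; the table and group names are taken from the log, the class name is illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class VerifyMoveSketch {
      // Confirms that testRename's single region is now hosted by a member of the
      // default rsgroup, as expected after the REOPEN/MOVE completes.
      static boolean regionHostedByDefaultGroup(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          // reload=true bypasses the client-side cache so we see the post-move location.
          HRegionLocation loc = locator.getRegionLocation(new byte[0], true);
          Address host = Address.fromParts(loc.getHostname(), loc.getPort());
          RSGroupInfo defaultGroup =
              new RSGroupAdminClient(conn).getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          return defaultGroup.getServers().contains(host);
        }
      }
    }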
2023-07-19 21:15:31,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:31,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup default 2023-07-19 21:15:31,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-19 21:15:31,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:31,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-19 21:15:31,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to newgroup 2023-07-19 21:15:31,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-19 21:15:31,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:31,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-19 21:15:31,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:31,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:31,352 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:31,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:31,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:31,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:31,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:31,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 765 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802531362, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:31,363 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:31,365 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:31,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,366 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:31,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:31,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,384 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=517 (was 526), OpenFileDescriptor=810 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=353 (was 366), ProcessCount=174 (was 176), AvailableMemoryMB=4700 (was 2477) - AvailableMemoryMB LEAK? - 2023-07-19 21:15:31,384 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-19 21:15:31,401 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=517, OpenFileDescriptor=810, MaxFileDescriptor=60000, SystemLoadAverage=353, ProcessCount=174, AvailableMemoryMB=4699 2023-07-19 21:15:31,401 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-19 21:15:31,401 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-19 21:15:31,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:31,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
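The ConstraintException above is expected rather than a test failure: jenkins-hbase4.apache.org:36267 is the master's own RPC endpoint (the same port the RpcServer handler threads are bound to), not an online region server, so RSGroupAdminServer.moveServers rejects the request and TestRSGroupsBase only logs it as "Got this on setup, FYI" before continuing. A minimal sketch of that tolerant client-side handling follows; the address and group name come from the log, and the class and method names are illustrative.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveMasterSketch {
      // Tries to park the master's address in the "master" group; when the address
      // is not an online region server the endpoint answers with ConstraintException,
      // which is treated as informational, matching the WARN in the log above.
      static void tryMoveMasterAddress(RSGroupAdminClient rsGroupAdmin) throws IOException {
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36267)),
              "master");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist." -- safe to ignore here.
        }
      }
    }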
2023-07-19 21:15:31,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:31,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:31,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:31,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:31,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:31,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:31,417 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:31,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:31,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:31,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:31,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:31,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 793 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802531430, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:31,431 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:31,433 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:31,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,434 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:31,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:31,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-19 21:15:31,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:31,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-19 21:15:31,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-19 21:15:31,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-19 21:15:31,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-19 21:15:31,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:33664 deadline: 1689802531446, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-19 21:15:31,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-19 21:15:31,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 808 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:33664 deadline: 1689802531450, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 21:15:31,452 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-19 21:15:31,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-19 21:15:31,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-19 21:15:31,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 812 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:33664 deadline: 1689802531459, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-19 21:15:31,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:31,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:31,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:31,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:31,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:31,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:31,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:31,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:31,479 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:31,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:31,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:31,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:31,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:31,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 836 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802531491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:31,507 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:31,509 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:31,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,510 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:31,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:31,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,529 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=521 (was 517) Potentially hanging thread: hconnection-0x63197ba-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a756921-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=810 (was 810), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=353 (was 353), ProcessCount=174 (was 174), AvailableMemoryMB=4698 (was 4699) 2023-07-19 21:15:31,529 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-19 21:15:31,548 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=521, OpenFileDescriptor=810, MaxFileDescriptor=60000, SystemLoadAverage=353, ProcessCount=174, AvailableMemoryMB=4698 2023-07-19 21:15:31,548 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-19 21:15:31,548 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-19 21:15:31,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:31,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:31,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:31,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:31,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:31,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:31,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:31,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:31,562 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:31,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:31,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:31,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:31,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:31,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:31,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 864 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802531580, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:31,580 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:31,582 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:31,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,583 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:31,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:31,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:31,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,587 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:31,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:31,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:31,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-19 21:15:31,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to default 2023-07-19 21:15:31,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:31,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:31,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:31,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,604 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:31,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:31,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:31,609 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:31,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-19 21:15:31,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 21:15:31,611 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:31,611 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:31,611 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:31,612 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:31,613 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa empty. 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 empty. 2023-07-19 21:15:31,618 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e empty. 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 empty. 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 empty. 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:31,619 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:31,619 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 21:15:31,631 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:31,632 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 14b8b41fd4c563398041f625125c5ffa, NAME => 'Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:31,633 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 58b927a7f03bb7c7389a698724e96b65, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:31,633 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 171402f2be3b53976f59ba125afabab2, NAME => 'Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 14b8b41fd4c563398041f625125c5ffa, disabling compactions & flushes 2023-07-19 21:15:31,651 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. after waiting 0 ms 2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:31,651 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 
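The rsgroup RPCs logged above (RSGroupAdminService.AddRSGroup, MoveServers and GetRSGroupInfo against Group_testDisabledTableMove_1549840512) are the kind of calls a Java client can issue through the RSGroupAdminClient in the hbase-rsgroup module. A minimal client-side sketch, with the group name and the two server addresses taken from the log above; the connection setup is illustrative and not taken from the test harness:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupMoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // AddRSGroup: create the group named in the log above.
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1549840512");

      // MoveServers: move the two region servers listed in the log into the new group.
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 33539),
          Address.fromParts("jenkins-hbase4.apache.org", 33985)));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1549840512");

      // GetRSGroupInfo: read the group back, as the GetRSGroupInfo request above does.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1549840512");
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}

The "Moving 0 region(s) to group default" and "All regions from [...] are moved back to default" entries above record the region bookkeeping done for the two servers during MoveServers; here no user regions were hosted on them at the time, so nothing had to be relocated.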
2023-07-19 21:15:31,651 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 14b8b41fd4c563398041f625125c5ffa: 2023-07-19 21:15:31,652 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 45ff046e50f9ce9ba41b7332283db344, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 58b927a7f03bb7c7389a698724e96b65, disabling compactions & flushes 2023-07-19 21:15:31,655 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. after waiting 0 ms 2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:31,655 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 
2023-07-19 21:15:31,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 58b927a7f03bb7c7389a698724e96b65: 2023-07-19 21:15:31,655 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9c87b6b4746ce3d9ca45c2573f3eca4e, NAME => 'Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 171402f2be3b53976f59ba125afabab2, disabling compactions & flushes 2023-07-19 21:15:31,660 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. after waiting 0 ms 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:31,660 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:31,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 171402f2be3b53976f59ba125afabab2: 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 45ff046e50f9ce9ba41b7332283db344, disabling compactions & flushes 2023-07-19 21:15:31,671 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 
2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. after waiting 0 ms 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:31,671 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 45ff046e50f9ce9ba41b7332283db344: 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 9c87b6b4746ce3d9ca45c2573f3eca4e, disabling compactions & flushes 2023-07-19 21:15:31,671 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:31,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. after waiting 0 ms 2023-07-19 21:15:31,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:31,672 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 
2023-07-19 21:15:31,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 9c87b6b4746ce3d9ca45c2573f3eca4e: 2023-07-19 21:15:31,675 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:31,676 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801331676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801331676"}]},"ts":"1689801331676"} 2023-07-19 21:15:31,676 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801331676"}]},"ts":"1689801331676"} 2023-07-19 21:15:31,676 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801331676"}]},"ts":"1689801331676"} 2023-07-19 21:15:31,676 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801331676"}]},"ts":"1689801331676"} 2023-07-19 21:15:31,676 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801331676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801331676"}]},"ts":"1689801331676"} 2023-07-19 21:15:31,678 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
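The create-table entries above show pid=135 building 'Group_testDisabledTableMove' with a single family 'f' (VERSIONS => '1') and five regions split at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', then adding the five regions to hbase:meta. A sketch of the equivalent client-side call, assuming an open Connection; Bytes.toBytesBinary is used here only to express the non-printable split points and is not necessarily how the test builds them:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  static void createTable(Connection conn) throws IOException {
    // Descriptor matching the one logged above: one family 'f', one version kept.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)
            .build())
        .build();

    // Four split points -> five regions, the boundaries visible in the region names above.
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz")
    };

    try (Admin admin = conn.getAdmin()) {
      // Submits the CreateTableProcedure (pid=135 in the log) and waits for it to finish.
      admin.createTable(desc, splitKeys);
    }
  }
}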
2023-07-19 21:15:31,679 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:31,679 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801331679"}]},"ts":"1689801331679"} 2023-07-19 21:15:31,681 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-19 21:15:31,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:31,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:31,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:31,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:31,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, ASSIGN}] 2023-07-19 21:15:31,687 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, ASSIGN 2023-07-19 21:15:31,687 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, ASSIGN 2023-07-19 21:15:31,687 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, ASSIGN 2023-07-19 21:15:31,687 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, ASSIGN 2023-07-19 21:15:31,688 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:31,688 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, ASSIGN 2023-07-19 21:15:31,688 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:31,688 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43325,1689801307487; forceNewPlan=false, retain=false 2023-07-19 21:15:31,688 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:31,689 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45225,1689801303640; forceNewPlan=false, retain=false 2023-07-19 21:15:31,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 21:15:31,838 INFO [jenkins-hbase4:36267] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
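The recurring "Checking to see if procedure is done pid=135" entries are the master answering the client's completion polls for the create-table procedure. With the Java Admin API that polling is hidden inside the blocking createTable call; the async variant makes it explicit. A sketch, reusing the descriptor and split keys from the previous example:

import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CreateTableAsyncSketch {
  static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splitKeys)
      throws IOException, InterruptedException, ExecutionException {
    // Submits the CreateTableProcedure on the master and returns immediately.
    Future<Void> pending = admin.createTableAsync(desc, splitKeys);
    // get() polls the master until the procedure is done, the same round trips that
    // appear above as "Checking to see if procedure is done pid=135".
    pending.get();
  }
}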
2023-07-19 21:15:31,842 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=45ff046e50f9ce9ba41b7332283db344, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:31,842 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=58b927a7f03bb7c7389a698724e96b65, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:31,842 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801331842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801331842"}]},"ts":"1689801331842"} 2023-07-19 21:15:31,842 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801331842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801331842"}]},"ts":"1689801331842"} 2023-07-19 21:15:31,842 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=9c87b6b4746ce3d9ca45c2573f3eca4e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:31,842 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=171402f2be3b53976f59ba125afabab2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:31,842 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=14b8b41fd4c563398041f625125c5ffa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:31,843 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801331842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801331842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801331842"}]},"ts":"1689801331842"} 2023-07-19 21:15:31,843 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801331842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801331842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801331842"}]},"ts":"1689801331842"} 2023-07-19 21:15:31,843 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801331842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801331842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801331842"}]},"ts":"1689801331842"} 2023-07-19 21:15:31,844 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=139, state=RUNNABLE; OpenRegionProcedure 45ff046e50f9ce9ba41b7332283db344, 
server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:31,846 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-19 21:15:31,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=138, state=RUNNABLE; OpenRegionProcedure 58b927a7f03bb7c7389a698724e96b65, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:31,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=137, state=RUNNABLE; OpenRegionProcedure 171402f2be3b53976f59ba125afabab2, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:31,848 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=136, state=RUNNABLE; OpenRegionProcedure 14b8b41fd4c563398041f625125c5ffa, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:31,851 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; OpenRegionProcedure 9c87b6b4746ce3d9ca45c2573f3eca4e, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:31,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 21:15:31,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:31,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 45ff046e50f9ce9ba41b7332283db344, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-19 21:15:32,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:32,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,001 INFO [StoreOpener-45ff046e50f9ce9ba41b7332283db344-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,003 DEBUG [StoreOpener-45ff046e50f9ce9ba41b7332283db344-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/f 2023-07-19 21:15:32,003 DEBUG [StoreOpener-45ff046e50f9ce9ba41b7332283db344-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/f 2023-07-19 21:15:32,003 INFO [StoreOpener-45ff046e50f9ce9ba41b7332283db344-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 45ff046e50f9ce9ba41b7332283db344 columnFamilyName f 2023-07-19 21:15:32,004 INFO [StoreOpener-45ff046e50f9ce9ba41b7332283db344-1] regionserver.HStore(310): Store=45ff046e50f9ce9ba41b7332283db344/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:32,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 
2023-07-19 21:15:32,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9c87b6b4746ce3d9ca45c2573f3eca4e, NAME => 'Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-19 21:15:32,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:32,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,009 INFO [StoreOpener-9c87b6b4746ce3d9ca45c2573f3eca4e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,010 DEBUG [StoreOpener-9c87b6b4746ce3d9ca45c2573f3eca4e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/f 2023-07-19 21:15:32,011 DEBUG [StoreOpener-9c87b6b4746ce3d9ca45c2573f3eca4e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/f 2023-07-19 21:15:32,011 INFO [StoreOpener-9c87b6b4746ce3d9ca45c2573f3eca4e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9c87b6b4746ce3d9ca45c2573f3eca4e columnFamilyName f 2023-07-19 21:15:32,012 INFO [StoreOpener-9c87b6b4746ce3d9ca45c2573f3eca4e-1] regionserver.HStore(310): Store=9c87b6b4746ce3d9ca45c2573f3eca4e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:32,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:32,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 45ff046e50f9ce9ba41b7332283db344; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11395927040, jitterRate=0.06132841110229492}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:32,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 45ff046e50f9ce9ba41b7332283db344: 2023-07-19 21:15:32,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344., pid=141, masterSystemTime=1689801331995 2023-07-19 21:15:32,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:32,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:32,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 
2023-07-19 21:15:32,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58b927a7f03bb7c7389a698724e96b65, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-19 21:15:32,020 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=45ff046e50f9ce9ba41b7332283db344, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:32,020 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332019"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801332019"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801332019"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801332019"}]},"ts":"1689801332019"} 2023-07-19 21:15:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=139 2023-07-19 21:15:32,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=139, state=SUCCESS; OpenRegionProcedure 45ff046e50f9ce9ba41b7332283db344, server=jenkins-hbase4.apache.org,43325,1689801307487 in 177 msec 2023-07-19 21:15:32,024 INFO [StoreOpener-58b927a7f03bb7c7389a698724e96b65-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:32,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9c87b6b4746ce3d9ca45c2573f3eca4e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9846990880, 
jitterRate=-0.08292751014232635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:32,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, ASSIGN in 340 msec 2023-07-19 21:15:32,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9c87b6b4746ce3d9ca45c2573f3eca4e: 2023-07-19 21:15:32,026 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e., pid=145, masterSystemTime=1689801332003 2023-07-19 21:15:32,027 DEBUG [StoreOpener-58b927a7f03bb7c7389a698724e96b65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/f 2023-07-19 21:15:32,027 DEBUG [StoreOpener-58b927a7f03bb7c7389a698724e96b65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/f 2023-07-19 21:15:32,027 INFO [StoreOpener-58b927a7f03bb7c7389a698724e96b65-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58b927a7f03bb7c7389a698724e96b65 columnFamilyName f 2023-07-19 21:15:32,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:32,028 INFO [StoreOpener-58b927a7f03bb7c7389a698724e96b65-1] regionserver.HStore(310): Store=58b927a7f03bb7c7389a698724e96b65/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:32,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:32,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 
2023-07-19 21:15:32,028 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=9c87b6b4746ce3d9ca45c2573f3eca4e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 14b8b41fd4c563398041f625125c5ffa, NAME => 'Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-19 21:15:32,028 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332028"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801332028"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801332028"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801332028"}]},"ts":"1689801332028"} 2023-07-19 21:15:32,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:32,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,030 INFO [StoreOpener-14b8b41fd4c563398041f625125c5ffa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,032 DEBUG [StoreOpener-14b8b41fd4c563398041f625125c5ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/f 2023-07-19 21:15:32,032 DEBUG [StoreOpener-14b8b41fd4c563398041f625125c5ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/f 2023-07-19 21:15:32,033 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-19 21:15:32,033 INFO [StoreOpener-14b8b41fd4c563398041f625125c5ffa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 14b8b41fd4c563398041f625125c5ffa columnFamilyName f 2023-07-19 21:15:32,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; OpenRegionProcedure 9c87b6b4746ce3d9ca45c2573f3eca4e, server=jenkins-hbase4.apache.org,45225,1689801303640 in 182 msec 2023-07-19 21:15:32,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,034 INFO [StoreOpener-14b8b41fd4c563398041f625125c5ffa-1] regionserver.HStore(310): Store=14b8b41fd4c563398041f625125c5ffa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:32,034 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, ASSIGN in 349 msec 2023-07-19 21:15:32,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:32,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58b927a7f03bb7c7389a698724e96b65; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9685848160, jitterRate=-0.09793509542942047}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:32,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58b927a7f03bb7c7389a698724e96b65: 2023-07-19 21:15:32,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:32,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 14b8b41fd4c563398041f625125c5ffa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11143966080, jitterRate=0.03786271810531616}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:32,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 14b8b41fd4c563398041f625125c5ffa: 2023-07-19 21:15:32,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa., pid=144, masterSystemTime=1689801332003 2023-07-19 21:15:32,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65., pid=142, masterSystemTime=1689801331995 2023-07-19 21:15:32,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 
2023-07-19 21:15:32,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 171402f2be3b53976f59ba125afabab2, NAME => 'Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-19 21:15:32,045 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=14b8b41fd4c563398041f625125c5ffa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:32,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,046 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332045"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801332045"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801332045"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801332045"}]},"ts":"1689801332045"} 2023-07-19 21:15:32,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:32,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 
2023-07-19 21:15:32,047 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=58b927a7f03bb7c7389a698724e96b65, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:32,047 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332047"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801332047"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801332047"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801332047"}]},"ts":"1689801332047"} 2023-07-19 21:15:32,050 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=136 2023-07-19 21:15:32,050 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=136, state=SUCCESS; OpenRegionProcedure 14b8b41fd4c563398041f625125c5ffa, server=jenkins-hbase4.apache.org,45225,1689801303640 in 200 msec 2023-07-19 21:15:32,051 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=138 2023-07-19 21:15:32,051 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=138, state=SUCCESS; OpenRegionProcedure 58b927a7f03bb7c7389a698724e96b65, server=jenkins-hbase4.apache.org,43325,1689801307487 in 203 msec 2023-07-19 21:15:32,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, ASSIGN in 366 msec 2023-07-19 21:15:32,052 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, ASSIGN in 367 msec 2023-07-19 21:15:32,057 INFO [StoreOpener-171402f2be3b53976f59ba125afabab2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,059 DEBUG [StoreOpener-171402f2be3b53976f59ba125afabab2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/f 2023-07-19 21:15:32,059 DEBUG [StoreOpener-171402f2be3b53976f59ba125afabab2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/f 2023-07-19 21:15:32,060 INFO [StoreOpener-171402f2be3b53976f59ba125afabab2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 171402f2be3b53976f59ba125afabab2 columnFamilyName f 2023-07-19 21:15:32,060 INFO [StoreOpener-171402f2be3b53976f59ba125afabab2-1] regionserver.HStore(310): Store=171402f2be3b53976f59ba125afabab2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:32,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:32,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 171402f2be3b53976f59ba125afabab2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11870611520, jitterRate=0.10553684830665588}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:32,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 171402f2be3b53976f59ba125afabab2: 2023-07-19 21:15:32,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2., pid=143, masterSystemTime=1689801332003 2023-07-19 21:15:32,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:32,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 
2023-07-19 21:15:32,070 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=171402f2be3b53976f59ba125afabab2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,070 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332069"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801332069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801332069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801332069"}]},"ts":"1689801332069"} 2023-07-19 21:15:32,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=137 2023-07-19 21:15:32,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=137, state=SUCCESS; OpenRegionProcedure 171402f2be3b53976f59ba125afabab2, server=jenkins-hbase4.apache.org,45225,1689801303640 in 224 msec 2023-07-19 21:15:32,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=135 2023-07-19 21:15:32,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, ASSIGN in 389 msec 2023-07-19 21:15:32,075 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:32,075 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801332075"}]},"ts":"1689801332075"} 2023-07-19 21:15:32,076 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-19 21:15:32,079 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:32,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 473 msec 2023-07-19 21:15:32,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-19 21:15:32,214 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-19 21:15:32,214 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-19 21:15:32,214 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:32,218 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-19 21:15:32,219 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:32,219 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-19 21:15:32,219 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:32,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 21:15:32,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:32,226 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 21:15:32,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 21:15:32,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 21:15:32,231 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801332231"}]},"ts":"1689801332231"} 2023-07-19 21:15:32,232 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-19 21:15:32,234 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-19 21:15:32,235 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, UNASSIGN}] 2023-07-19 21:15:32,237 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, UNASSIGN 2023-07-19 21:15:32,237 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, UNASSIGN 2023-07-19 21:15:32,237 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, UNASSIGN 2023-07-19 21:15:32,237 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, UNASSIGN 2023-07-19 21:15:32,237 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, UNASSIGN 2023-07-19 21:15:32,238 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=58b927a7f03bb7c7389a698724e96b65, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:32,238 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801332238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801332238"}]},"ts":"1689801332238"} 2023-07-19 21:15:32,238 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=14b8b41fd4c563398041f625125c5ffa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,238 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=171402f2be3b53976f59ba125afabab2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,238 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801332238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801332238"}]},"ts":"1689801332238"} 2023-07-19 21:15:32,238 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=45ff046e50f9ce9ba41b7332283db344, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:32,238 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=9c87b6b4746ce3d9ca45c2573f3eca4e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,238 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801332238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801332238"}]},"ts":"1689801332238"} 2023-07-19 21:15:32,238 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801332238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801332238"}]},"ts":"1689801332238"} 2023-07-19 21:15:32,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=149, state=RUNNABLE; CloseRegionProcedure 58b927a7f03bb7c7389a698724e96b65, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:32,238 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801332238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801332238"}]},"ts":"1689801332238"} 2023-07-19 21:15:32,240 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure 14b8b41fd4c563398041f625125c5ffa, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:32,241 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=150, state=RUNNABLE; CloseRegionProcedure 45ff046e50f9ce9ba41b7332283db344, server=jenkins-hbase4.apache.org,43325,1689801307487}] 2023-07-19 21:15:32,241 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=151, state=RUNNABLE; CloseRegionProcedure 9c87b6b4746ce3d9ca45c2573f3eca4e, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:32,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=148, state=RUNNABLE; CloseRegionProcedure 171402f2be3b53976f59ba125afabab2, server=jenkins-hbase4.apache.org,45225,1689801303640}] 2023-07-19 21:15:32,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 21:15:32,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 45ff046e50f9ce9ba41b7332283db344, disabling compactions & flushes 2023-07-19 21:15:32,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:32,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:32,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. after waiting 0 ms 2023-07-19 21:15:32,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 
2023-07-19 21:15:32,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 14b8b41fd4c563398041f625125c5ffa, disabling compactions & flushes 2023-07-19 21:15:32,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. after waiting 0 ms 2023-07-19 21:15:32,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:32,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:32,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa. 2023-07-19 21:15:32,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344. 2023-07-19 21:15:32,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 14b8b41fd4c563398041f625125c5ffa: 2023-07-19 21:15:32,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 45ff046e50f9ce9ba41b7332283db344: 2023-07-19 21:15:32,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58b927a7f03bb7c7389a698724e96b65, disabling compactions & flushes 2023-07-19 21:15:32,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:32,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 
2023-07-19 21:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. after waiting 0 ms 2023-07-19 21:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:32,402 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=45ff046e50f9ce9ba41b7332283db344, regionState=CLOSED 2023-07-19 21:15:32,402 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801332402"}]},"ts":"1689801332402"} 2023-07-19 21:15:32,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9c87b6b4746ce3d9ca45c2573f3eca4e, disabling compactions & flushes 2023-07-19 21:15:32,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:32,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 2023-07-19 21:15:32,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. after waiting 0 ms 2023-07-19 21:15:32,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 
2023-07-19 21:15:32,405 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=14b8b41fd4c563398041f625125c5ffa, regionState=CLOSED 2023-07-19 21:15:32,406 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801332405"}]},"ts":"1689801332405"} 2023-07-19 21:15:32,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=150 2023-07-19 21:15:32,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=150, state=SUCCESS; CloseRegionProcedure 45ff046e50f9ce9ba41b7332283db344, server=jenkins-hbase4.apache.org,43325,1689801307487 in 162 msec 2023-07-19 21:15:32,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:32,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65. 2023-07-19 21:15:32,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58b927a7f03bb7c7389a698724e96b65: 2023-07-19 21:15:32,408 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=45ff046e50f9ce9ba41b7332283db344, UNASSIGN in 172 msec 2023-07-19 21:15:32,409 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-19 21:15:32,409 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure 14b8b41fd4c563398041f625125c5ffa, server=jenkins-hbase4.apache.org,45225,1689801303640 in 167 msec 2023-07-19 21:15:32,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:32,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e. 
2023-07-19 21:15:32,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9c87b6b4746ce3d9ca45c2573f3eca4e: 2023-07-19 21:15:32,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=58b927a7f03bb7c7389a698724e96b65, regionState=CLOSED 2023-07-19 21:15:32,410 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=14b8b41fd4c563398041f625125c5ffa, UNASSIGN in 174 msec 2023-07-19 21:15:32,410 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801332410"}]},"ts":"1689801332410"} 2023-07-19 21:15:32,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 171402f2be3b53976f59ba125afabab2, disabling compactions & flushes 2023-07-19 21:15:32,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:32,413 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=9c87b6b4746ce3d9ca45c2573f3eca4e, regionState=CLOSED 2023-07-19 21:15:32,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 2023-07-19 21:15:32,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. after waiting 0 ms 2023-07-19 21:15:32,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 
2023-07-19 21:15:32,413 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689801332413"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801332413"}]},"ts":"1689801332413"} 2023-07-19 21:15:32,415 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=149 2023-07-19 21:15:32,415 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=149, state=SUCCESS; CloseRegionProcedure 58b927a7f03bb7c7389a698724e96b65, server=jenkins-hbase4.apache.org,43325,1689801307487 in 172 msec 2023-07-19 21:15:32,416 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58b927a7f03bb7c7389a698724e96b65, UNASSIGN in 180 msec 2023-07-19 21:15:32,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=151 2023-07-19 21:15:32,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=151, state=SUCCESS; CloseRegionProcedure 9c87b6b4746ce3d9ca45c2573f3eca4e, server=jenkins-hbase4.apache.org,45225,1689801303640 in 173 msec 2023-07-19 21:15:32,418 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c87b6b4746ce3d9ca45c2573f3eca4e, UNASSIGN in 181 msec 2023-07-19 21:15:32,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:32,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2. 
2023-07-19 21:15:32,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 171402f2be3b53976f59ba125afabab2: 2023-07-19 21:15:32,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,422 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=171402f2be3b53976f59ba125afabab2, regionState=CLOSED 2023-07-19 21:15:32,422 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689801332422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801332422"}]},"ts":"1689801332422"} 2023-07-19 21:15:32,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=148 2023-07-19 21:15:32,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=148, state=SUCCESS; CloseRegionProcedure 171402f2be3b53976f59ba125afabab2, server=jenkins-hbase4.apache.org,45225,1689801303640 in 181 msec 2023-07-19 21:15:32,428 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=146 2023-07-19 21:15:32,428 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=171402f2be3b53976f59ba125afabab2, UNASSIGN in 190 msec 2023-07-19 21:15:32,429 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801332429"}]},"ts":"1689801332429"} 2023-07-19 21:15:32,430 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-19 21:15:32,436 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-19 21:15:32,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 210 msec 2023-07-19 21:15:32,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-19 21:15:32,533 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-19 21:15:32,533 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:32,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:32,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-19 21:15:32,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1549840512, current retry=0 2023-07-19 21:15:32,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1549840512. 2023-07-19 21:15:32,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:32,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-19 21:15:32,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:32,552 INFO [Listener at localhost/39507] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-19 21:15:32,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-19 21:15:32,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:32,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 924 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:33664 deadline: 1689801392553, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-19 21:15:32,554 DEBUG [Listener at localhost/39507] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-19 21:15:32,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-19 21:15:32,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,558 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1549840512' 2023-07-19 21:15:32,559 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:32,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:32,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-19 21:15:32,566 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,566 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,566 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,566 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,566 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,569 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/recovered.edits] 2023-07-19 21:15:32,569 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/recovered.edits] 2023-07-19 21:15:32,569 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/recovered.edits] 2023-07-19 21:15:32,570 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/recovered.edits] 2023-07-19 21:15:32,570 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/f, FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/recovered.edits] 2023-07-19 21:15:32,579 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344/recovered.edits/4.seqid 2023-07-19 21:15:32,580 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa/recovered.edits/4.seqid 2023-07-19 21:15:32,581 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/45ff046e50f9ce9ba41b7332283db344 2023-07-19 21:15:32,581 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65/recovered.edits/4.seqid 2023-07-19 21:15:32,581 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e/recovered.edits/4.seqid 2023-07-19 21:15:32,581 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/14b8b41fd4c563398041f625125c5ffa 2023-07-19 21:15:32,582 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/58b927a7f03bb7c7389a698724e96b65 2023-07-19 21:15:32,582 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/9c87b6b4746ce3d9ca45c2573f3eca4e 2023-07-19 21:15:32,582 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/recovered.edits/4.seqid to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/archive/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2/recovered.edits/4.seqid 2023-07-19 21:15:32,583 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/.tmp/data/default/Group_testDisabledTableMove/171402f2be3b53976f59ba125afabab2 2023-07-19 21:15:32,583 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-19 21:15:32,588 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,590 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-19 21:15:32,596 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-19 21:15:32,597 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,597 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-19 21:15:32,597 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801332597"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,598 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801332597"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,598 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801332597"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,598 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801332597"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,598 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801332597"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,600 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-19 21:15:32,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 14b8b41fd4c563398041f625125c5ffa, NAME => 'Group_testDisabledTableMove,,1689801331605.14b8b41fd4c563398041f625125c5ffa.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 171402f2be3b53976f59ba125afabab2, NAME => 'Group_testDisabledTableMove,aaaaa,1689801331605.171402f2be3b53976f59ba125afabab2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 58b927a7f03bb7c7389a698724e96b65, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689801331605.58b927a7f03bb7c7389a698724e96b65.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 45ff046e50f9ce9ba41b7332283db344, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689801331605.45ff046e50f9ce9ba41b7332283db344.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9c87b6b4746ce3d9ca45c2573f3eca4e, NAME => 'Group_testDisabledTableMove,zzzzz,1689801331605.9c87b6b4746ce3d9ca45c2573f3eca4e.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-19 21:15:32,600 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-19 21:15:32,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801332600"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:32,601 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-19 21:15:32,603 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-19 21:15:32,605 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 49 msec 2023-07-19 21:15:32,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-19 21:15:32,665 INFO [Listener at localhost/39507] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-19 21:15:32,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:32,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:32,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:32,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:32,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:32,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:32,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-19 21:15:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:32,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:32,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985] to rsgroup default 2023-07-19 21:15:32,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:32,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1549840512, current retry=0 2023-07-19 21:15:32,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33539,1689801303815, jenkins-hbase4.apache.org,33985,1689801303414] are moved back to Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1549840512 => default 2023-07-19 21:15:32,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:32,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1549840512 2023-07-19 21:15:32,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:32,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:32,694 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:32,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:32,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:32,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 
21:15:32,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:32,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:32,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:32,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 958 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802532711, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:32,712 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:32,714 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:32,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,715 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:32,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:32,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:32,735 INFO [Listener at localhost/39507] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=525 (was 521) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972617606_17 at /127.0.0.1:40634 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2231fec8-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2042935688_17 at /127.0.0.1:56656 [Waiting for operation #2] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=827 (was 810) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=332 (was 353), ProcessCount=174 (was 174), AvailableMemoryMB=4704 (was 4698) - AvailableMemoryMB LEAK? - 2023-07-19 21:15:32,735 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-19 21:15:32,754 INFO [Listener at localhost/39507] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=525, OpenFileDescriptor=827, MaxFileDescriptor=60000, SystemLoadAverage=332, ProcessCount=174, AvailableMemoryMB=4703 2023-07-19 21:15:32,754 WARN [Listener at localhost/39507] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-19 21:15:32,754 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-19 21:15:32,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:32,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:32,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:32,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:32,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:32,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:32,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:32,772 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:32,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:32,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:32,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:32,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:32,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:32,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36267] to rsgroup master 2023-07-19 21:15:32,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:32,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] ipc.CallRunner(144): callId: 986 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33664 deadline: 1689802532782, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 2023-07-19 21:15:32,783 WARN [Listener at localhost/39507] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36267 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:32,785 INFO [Listener at localhost/39507] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:32,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:32,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:32,786 INFO [Listener at localhost/39507] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33539, jenkins-hbase4.apache.org:33985, jenkins-hbase4.apache.org:43325, jenkins-hbase4.apache.org:45225], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:32,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:32,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36267] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:32,787 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 21:15:32,787 INFO [Listener at localhost/39507] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 21:15:32,787 DEBUG [Listener at localhost/39507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65b017d0 to 127.0.0.1:58627 2023-07-19 21:15:32,787 DEBUG [Listener at localhost/39507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,788 DEBUG [Listener at localhost/39507] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 21:15:32,788 DEBUG [Listener at localhost/39507] util.JVMClusterUtil(257): Found active master hash=1154234514, stopped=false 2023-07-19 21:15:32,789 DEBUG [Listener at localhost/39507] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 21:15:32,789 DEBUG [Listener at localhost/39507] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 21:15:32,789 INFO [Listener at localhost/39507] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:32,791 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:32,791 INFO [Listener at localhost/39507] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 21:15:32,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:32,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:32,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:32,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:32,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:32,792 DEBUG [Listener at localhost/39507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a5d5391 to 127.0.0.1:58627 2023-07-19 21:15:32,793 DEBUG [Listener at localhost/39507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,793 INFO [Listener at localhost/39507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33985,1689801303414' ***** 2023-07-19 21:15:32,793 INFO [Listener at localhost/39507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45225,1689801303640' ***** 2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:32,794 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:32,794 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33539,1689801303815' ***** 2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43325,1689801307487' ***** 2023-07-19 21:15:32,794 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:32,794 INFO [Listener at localhost/39507] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:32,795 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-19 21:15:32,799 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:32,801 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-19 21:15:32,815 INFO [RS:3;jenkins-hbase4:43325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3cbddc65{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:32,815 INFO [RS:0;jenkins-hbase4:33985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@75640050{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:32,815 INFO [RS:1;jenkins-hbase4:45225] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7eecefb7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:32,815 INFO [RS:2;jenkins-hbase4:33539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@60f62ff2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:32,820 INFO [RS:3;jenkins-hbase4:43325] server.AbstractConnector(383): Stopped ServerConnector@3a94b5f4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:32,820 INFO [RS:1;jenkins-hbase4:45225] server.AbstractConnector(383): Stopped ServerConnector@7394d09{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:32,820 INFO [RS:1;jenkins-hbase4:45225] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:32,820 INFO [RS:0;jenkins-hbase4:33985] server.AbstractConnector(383): Stopped ServerConnector@4aa1e459{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:32,821 INFO [RS:1;jenkins-hbase4:45225] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@34301e2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:32,820 INFO [RS:2;jenkins-hbase4:33539] server.AbstractConnector(383): Stopped ServerConnector@447a00c4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:32,821 INFO [RS:0;jenkins-hbase4:33985] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:32,820 INFO [RS:3;jenkins-hbase4:43325] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:32,822 INFO [RS:2;jenkins-hbase4:33539] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:32,822 INFO [RS:1;jenkins-hbase4:45225] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e84c820{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:32,824 INFO [RS:3;jenkins-hbase4:43325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38634af7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:32,823 INFO [RS:0;jenkins-hbase4:33985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@180451a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:32,825 INFO [RS:3;jenkins-hbase4:43325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6508fc97{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:32,824 INFO [RS:2;jenkins-hbase4:33539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@22b75a27{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:32,826 INFO [RS:0;jenkins-hbase4:33985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4b56872c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:32,827 INFO [RS:2;jenkins-hbase4:33539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b0e15fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:32,829 INFO [RS:2;jenkins-hbase4:33539] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:32,829 INFO [RS:3;jenkins-hbase4:43325] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:32,829 INFO [RS:3;jenkins-hbase4:43325] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:32,829 INFO [RS:3;jenkins-hbase4:43325] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-19 21:15:32,829 INFO [RS:1;jenkins-hbase4:45225] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:32,829 INFO [RS:0;jenkins-hbase4:33985] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:32,830 INFO [RS:1;jenkins-hbase4:45225] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:32,830 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:32,829 INFO [RS:2;jenkins-hbase4:33539] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:32,830 INFO [RS:0;jenkins-hbase4:33985] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:32,829 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:32,830 INFO [RS:0;jenkins-hbase4:33985] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:32,830 INFO [RS:2;jenkins-hbase4:33539] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:32,830 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:32,830 INFO [RS:1;jenkins-hbase4:45225] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:32,830 DEBUG [RS:2;jenkins-hbase4:33539] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e372e56 to 127.0.0.1:58627 2023-07-19 21:15:32,830 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(3305): Received CLOSE for 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:32,830 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:32,829 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(3305): Received CLOSE for 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:32,829 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:32,830 DEBUG [RS:2;jenkins-hbase4:33539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,831 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33539,1689801303815; all regions closed. 
2023-07-19 21:15:32,831 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(3305): Received CLOSE for 4579bff74bc250630a8bf94138cfbe06 2023-07-19 21:15:32,831 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:32,831 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(3305): Received CLOSE for 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:32,831 DEBUG [RS:3;jenkins-hbase4:43325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2376f5c7 to 127.0.0.1:58627 2023-07-19 21:15:32,831 DEBUG [RS:3;jenkins-hbase4:43325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,830 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:32,831 DEBUG [RS:0;jenkins-hbase4:33985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x276cf5d2 to 127.0.0.1:58627 2023-07-19 21:15:32,831 DEBUG [RS:0;jenkins-hbase4:33985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,831 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33985,1689801303414; all regions closed. 2023-07-19 21:15:32,831 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:32,831 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 21:15:32,832 DEBUG [RS:1;jenkins-hbase4:45225] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x19ec330d to 127.0.0.1:58627 2023-07-19 21:15:32,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9006f5c913607b5238e3f3f5730241fd, disabling compactions & flushes 2023-07-19 21:15:32,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9326a7c092cdb69a8ea6c6746e9c2bb5, disabling compactions & flushes 2023-07-19 21:15:32,832 DEBUG [RS:1;jenkins-hbase4:45225] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:32,832 INFO [RS:1;jenkins-hbase4:45225] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:32,832 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1478): Online Regions={9006f5c913607b5238e3f3f5730241fd=testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd.} 2023-07-19 21:15:32,833 INFO [RS:1;jenkins-hbase4:45225] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:32,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:32,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. after waiting 0 ms 2023-07-19 21:15:32,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 
2023-07-19 21:15:32,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:32,833 INFO [RS:1;jenkins-hbase4:45225] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:32,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:32,833 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 21:15:32,833 DEBUG [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1504): Waiting on 9006f5c913607b5238e3f3f5730241fd 2023-07-19 21:15:32,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. after waiting 0 ms 2023-07-19 21:15:32,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:32,834 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-19 21:15:32,834 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 9326a7c092cdb69a8ea6c6746e9c2bb5=unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5., 4579bff74bc250630a8bf94138cfbe06=hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06., 1934a6e0c77f024959d2c8636ae430b9=hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9.} 2023-07-19 21:15:32,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:32,834 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1504): Waiting on 1588230740, 1934a6e0c77f024959d2c8636ae430b9, 4579bff74bc250630a8bf94138cfbe06, 9326a7c092cdb69a8ea6c6746e9c2bb5 2023-07-19 21:15:32,834 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:32,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:32,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:32,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:32,834 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB heapSize=61.09 KB 2023-07-19 21:15:32,840 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,840 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,842 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,843 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/testRename/9006f5c913607b5238e3f3f5730241fd/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 21:15:32,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:32,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9006f5c913607b5238e3f3f5730241fd: 2023-07-19 21:15:32,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689801325963.9006f5c913607b5238e3f3f5730241fd. 2023-07-19 21:15:32,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/default/unmovedTable/9326a7c092cdb69a8ea6c6746e9c2bb5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-19 21:15:32,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:32,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9326a7c092cdb69a8ea6c6746e9c2bb5: 2023-07-19 21:15:32,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689801327621.9326a7c092cdb69a8ea6c6746e9c2bb5. 2023-07-19 21:15:32,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4579bff74bc250630a8bf94138cfbe06, disabling compactions & flushes 2023-07-19 21:15:32,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:32,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:32,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. after waiting 0 ms 2023-07-19 21:15:32,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 
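The entries above show the standard region close sequence: take the close lock, disable updates, flush, then write a recovered.edits/<seqid>.seqid marker (newMaxSeqId=10 here) recording the highest persisted sequence id. Below is a tiny, hypothetical helper, not HBase's own WALSplitUtil, that recovers the number from such a marker file name:

/** Hypothetical helper (not HBase's WALSplitUtil): pull the sequence id out of a
 *  recovered.edits marker file name such as "10.seqid" (newMaxSeqId=10 above). */
final class SeqIdMarkerName {
    private SeqIdMarkerName() {}

    static long parseMaxSeqId(String markerFileName) {
        // Marker files are named "<seqid>.seqid"; the number is the max sequence id
        // persisted before the region closed.
        if (!markerFileName.endsWith(".seqid")) {
            throw new IllegalArgumentException("not a seqid marker: " + markerFileName);
        }
        return Long.parseLong(markerFileName.substring(0, markerFileName.length() - ".seqid".length()));
    }

    public static void main(String[] args) {
        System.out.println(parseMaxSeqId("10.seqid")); // prints 10
    }
}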
2023-07-19 21:15:32,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4579bff74bc250630a8bf94138cfbe06 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-19 21:15:32,861 DEBUG [RS:0;jenkins-hbase4:33985] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:32,861 INFO [RS:0;jenkins-hbase4:33985] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33985%2C1689801303414:(num 1689801305941) 2023-07-19 21:15:32,861 DEBUG [RS:0;jenkins-hbase4:33985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,861 INFO [RS:0;jenkins-hbase4:33985] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,862 INFO [RS:0;jenkins-hbase4:33985] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:32,863 INFO [RS:0;jenkins-hbase4:33985] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:32,863 INFO [RS:0;jenkins-hbase4:33985] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:32,863 INFO [RS:0;jenkins-hbase4:33985] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:32,863 DEBUG [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:32,864 INFO [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33539%2C1689801303815.meta:.meta(num 1689801306096) 2023-07-19 21:15:32,863 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
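Each "Chore service ... had [ScheduledChore ...]" entry lists the periodic chores a region server still held at shutdown (CompactionThroughputTuner, CompactedHFilesCleaner). A hedged sketch of that pattern using HBase's public ChoreService/ScheduledChore API follows; the chore name and body are invented for illustration:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

// Hypothetical chore, only to illustrate the ChoreService/ScheduledChore pattern the
// shutdown entries above refer to; not the region server's own chores.
public class ChoreExample {
    public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
            private volatile boolean stopped;
            @Override public void stop(String why) { stopped = true; }
            @Override public boolean isStopped() { return stopped; }
        };

        // Period is in milliseconds, matching "period=60000, unit=MILLISECONDS" above.
        ScheduledChore chore = new ScheduledChore("ExampleChore", stopper, 60_000) {
            @Override protected void chore() {
                System.out.println("chore tick"); // periodic work goes here
            }
        };

        ChoreService service = new ChoreService("example");
        service.scheduleChore(chore);

        Thread.sleep(1_000);
        stopper.stop("test over");
        service.shutdown();   // what the region server does on shutdown, as logged above
    }
}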
2023-07-19 21:15:32,864 INFO [RS:0;jenkins-hbase4:33985] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33985 2023-07-19 21:15:32,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/83992eabda5c48c684fb232264074641 2023-07-19 21:15:32,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83992eabda5c48c684fb232264074641 2023-07-19 21:15:32,904 DEBUG [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:32,904 INFO [RS:2;jenkins-hbase4:33539] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33539%2C1689801303815:(num 1689801305945) 2023-07-19 21:15:32,904 DEBUG [RS:2;jenkins-hbase4:33539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:32,904 INFO [RS:2;jenkins-hbase4:33539] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:32,911 INFO [RS:2;jenkins-hbase4:33539] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:32,912 INFO [RS:2;jenkins-hbase4:33539] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:32,912 INFO [RS:2;jenkins-hbase4:33539] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:32,912 INFO [RS:2;jenkins-hbase4:33539] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:32,912 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 21:15:32,913 INFO [RS:2;jenkins-hbase4:33539] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33539 2023-07-19 21:15:32,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/.tmp/info/82cf32d9942b45e2beeced1bd1c35144 2023-07-19 21:15:32,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/.tmp/info/82cf32d9942b45e2beeced1bd1c35144 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/info/82cf32d9942b45e2beeced1bd1c35144 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33985,1689801303414 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 
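The NodeDeleted events on /hbase/rs/<server> and the RegionServerTracker expirations that follow come from ephemeral znodes vanishing when each region server's ZooKeeper session closes. A generic ZooKeeper sketch of that mechanism is shown below; the quorum string and znode name are placeholders, and this is not HBase's ZKWatcher/RegionServerTracker code:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// An ephemeral znode disappears when its owning session closes, firing NodeDeleted
// for every watcher -- the same event pattern the log above records.
public class EphemeralNodeExample {
    public static void main(String[] args) throws Exception {
        String quorum = "127.0.0.1:2181";                  // placeholder quorum
        CountDownLatch deleted = new CountDownLatch(1);

        ZooKeeper watcherClient = new ZooKeeper(quorum, 30_000, event -> { });
        ZooKeeper ownerClient   = new ZooKeeper(quorum, 30_000, event -> { });

        String path = "/hbase/rs/example-server,12345,0";  // placeholder znode name
        ownerClient.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Register a one-shot watch; it fires with type=NodeDeleted once the owner goes away.
        watcherClient.exists(path, event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                deleted.countDown();
            }
        });

        ownerClient.close();   // session close removes the ephemeral node
        deleted.await();       // NodeDeleted observed, as in the log above
        watcherClient.close();
    }
}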
2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:32,937 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33539,1689801303815 2023-07-19 21:15:32,938 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33539,1689801303815] 2023-07-19 21:15:32,938 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33539,1689801303815; numProcessing=1 2023-07-19 21:15:32,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/info/82cf32d9942b45e2beeced1bd1c35144, entries=2, sequenceid=6, filesize=4.8 K 2023-07-19 21:15:32,940 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33539,1689801303815 already deleted, retry=false 2023-07-19 21:15:32,940 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33539,1689801303815 expired; onlineServers=3 2023-07-19 21:15:32,940 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33985,1689801303414] 2023-07-19 21:15:32,940 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33985,1689801303414; numProcessing=2 2023-07-19 21:15:32,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 4579bff74bc250630a8bf94138cfbe06 in 81ms, sequenceid=6, compaction requested=false 2023-07-19 21:15:32,944 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33985,1689801303414 already deleted, retry=false 2023-07-19 21:15:32,944 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33985,1689801303414 expired; onlineServers=2 2023-07-19 21:15:32,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/namespace/4579bff74bc250630a8bf94138cfbe06/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-19 21:15:32,966 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4579bff74bc250630a8bf94138cfbe06: 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689801306321.4579bff74bc250630a8bf94138cfbe06. 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1934a6e0c77f024959d2c8636ae430b9, disabling compactions & flushes 2023-07-19 21:15:32,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. after waiting 0 ms 2023-07-19 21:15:32,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:32,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1934a6e0c77f024959d2c8636ae430b9 1/1 column families, dataSize=22.13 KB heapSize=36.50 KB 2023-07-19 21:15:33,034 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43325,1689801307487; all regions closed. 2023-07-19 21:15:33,034 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1504): Waiting on 1588230740, 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:33,041 DEBUG [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:33,041 INFO [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43325%2C1689801307487.meta:.meta(num 1689801308630) 2023-07-19 21:15:33,046 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,046 INFO [RS:0;jenkins-hbase4:33985] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33985,1689801303414; zookeeper connection closed. 
2023-07-19 21:15:33,046 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f701e770001, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,046 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@628bb8bb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@628bb8bb 2023-07-19 21:15:33,047 DEBUG [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:33,047 INFO [RS:3;jenkins-hbase4:43325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43325%2C1689801307487:(num 1689801307869) 2023-07-19 21:15:33,047 DEBUG [RS:3;jenkins-hbase4:43325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:33,047 INFO [RS:3;jenkins-hbase4:43325] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:33,047 INFO [RS:3;jenkins-hbase4:43325] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:33,047 INFO [RS:3;jenkins-hbase4:43325] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:33,048 INFO [RS:3;jenkins-hbase4:43325] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:33,048 INFO [RS:3;jenkins-hbase4:43325] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:33,048 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:33,048 INFO [RS:3;jenkins-hbase4:43325] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43325 2023-07-19 21:15:33,146 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,146 INFO [RS:2;jenkins-hbase4:33539] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33539,1689801303815; zookeeper connection closed. 
2023-07-19 21:15:33,146 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:33539-0x1017f701e770003, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,146 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6efdadf2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6efdadf2 2023-07-19 21:15:33,147 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:33,147 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:33,147 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43325,1689801307487 2023-07-19 21:15:33,149 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43325,1689801307487] 2023-07-19 21:15:33,149 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43325,1689801307487; numProcessing=3 2023-07-19 21:15:33,151 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43325,1689801307487 already deleted, retry=false 2023-07-19 21:15:33,151 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43325,1689801307487 expired; onlineServers=1 2023-07-19 21:15:33,235 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1504): Waiting on 1588230740, 1934a6e0c77f024959d2c8636ae430b9 2023-07-19 21:15:33,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/rep_barrier/feb5383cbecc46f7b511f27cb1e897a7 2023-07-19 21:15:33,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for feb5383cbecc46f7b511f27cb1e897a7 2023-07-19 21:15:33,374 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/b24ae73f5ce5454a8946cc05076bccbc 2023-07-19 21:15:33,381 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b24ae73f5ce5454a8946cc05076bccbc 2023-07-19 21:15:33,382 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/info/83992eabda5c48c684fb232264074641 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/83992eabda5c48c684fb232264074641 2023-07-19 21:15:33,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.13 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/71f787a1647e48ca9d318468ee387f8e 2023-07-19 21:15:33,391 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83992eabda5c48c684fb232264074641 2023-07-19 21:15:33,391 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/info/83992eabda5c48c684fb232264074641, entries=62, sequenceid=216, filesize=11.8 K 2023-07-19 21:15:33,392 INFO [RS:3;jenkins-hbase4:43325] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43325,1689801307487; zookeeper connection closed. 2023-07-19 21:15:33,392 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,392 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:43325-0x1017f701e77000b, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,393 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/rep_barrier/feb5383cbecc46f7b511f27cb1e897a7 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier/feb5383cbecc46f7b511f27cb1e897a7 2023-07-19 21:15:33,393 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@137d81be] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@137d81be 2023-07-19 21:15:33,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 71f787a1647e48ca9d318468ee387f8e 2023-07-19 21:15:33,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/.tmp/m/71f787a1647e48ca9d318468ee387f8e as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/71f787a1647e48ca9d318468ee387f8e 2023-07-19 21:15:33,402 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for feb5383cbecc46f7b511f27cb1e897a7 2023-07-19 21:15:33,402 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/rep_barrier/feb5383cbecc46f7b511f27cb1e897a7, entries=8, sequenceid=216, filesize=5.8 K 2023-07-19 21:15:33,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 71f787a1647e48ca9d318468ee387f8e 2023-07-19 21:15:33,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/m/71f787a1647e48ca9d318468ee387f8e, entries=22, sequenceid=107, filesize=5.9 K 2023-07-19 21:15:33,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.13 KB/22659, heapSize ~36.48 KB/37360, currentSize=0 B/0 for 1934a6e0c77f024959d2c8636ae430b9 in 437ms, sequenceid=107, compaction requested=true 2023-07-19 21:15:33,404 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/.tmp/table/b24ae73f5ce5454a8946cc05076bccbc as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/b24ae73f5ce5454a8946cc05076bccbc 2023-07-19 21:15:33,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/rsgroup/1934a6e0c77f024959d2c8636ae430b9/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-19 21:15:33,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:33,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 2023-07-19 21:15:33,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1934a6e0c77f024959d2c8636ae430b9: 2023-07-19 21:15:33,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689801306536.1934a6e0c77f024959d2c8636ae430b9. 
2023-07-19 21:15:33,420 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b24ae73f5ce5454a8946cc05076bccbc 2023-07-19 21:15:33,420 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/table/b24ae73f5ce5454a8946cc05076bccbc, entries=16, sequenceid=216, filesize=6.0 K 2023-07-19 21:15:33,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 587ms, sequenceid=216, compaction requested=true 2023-07-19 21:15:33,421 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 21:15:33,435 DEBUG [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 21:15:33,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/data/hbase/meta/1588230740/recovered.edits/219.seqid, newMaxSeqId=219, maxSeqId=104 2023-07-19 21:15:33,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:33,443 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:33,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:33,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:33,635 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45225,1689801303640; all regions closed. 
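The flush entries above follow a write-to-.tmp-then-commit pattern: the memstore is flushed to a file under .tmp, the file is committed into the column family directory, and only then reported as "Added". A minimal sketch of the same idiom with the plain Hadoop FileSystem API, assuming placeholder local paths (this is not HBase's HRegionFileSystem implementation):

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write the new store file under .tmp, then "commit" it into the family directory
// with a rename, so readers never see a half-written file.
public class TmpThenCommit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);

        Path tmpFile   = new Path("/tmp/region/.tmp/filename-placeholder");
        Path finalFile = new Path("/tmp/region/info/filename-placeholder");

        // 1. Flush: write the data somewhere nobody reads from yet.
        try (FSDataOutputStream out = fs.create(tmpFile, true)) {
            out.write("flushed cells".getBytes(StandardCharsets.UTF_8));
        }

        // 2. Commit: a rename moves the finished file into place.
        fs.mkdirs(finalFile.getParent());
        if (!fs.rename(tmpFile, finalFile)) {
            throw new IllegalStateException("commit failed for " + tmpFile);
        }
        System.out.println("Added " + finalFile + ", size=" + fs.getFileStatus(finalFile).getLen());
    }
}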
2023-07-19 21:15:33,642 DEBUG [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:33,642 INFO [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45225%2C1689801303640.meta:.meta(num 1689801317506) 2023-07-19 21:15:33,647 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/WALs/jenkins-hbase4.apache.org,45225,1689801303640/jenkins-hbase4.apache.org%2C45225%2C1689801303640.1689801305939 not finished, retry = 0 2023-07-19 21:15:33,750 DEBUG [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/oldWALs 2023-07-19 21:15:33,750 INFO [RS:1;jenkins-hbase4:45225] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45225%2C1689801303640:(num 1689801305939) 2023-07-19 21:15:33,750 DEBUG [RS:1;jenkins-hbase4:45225] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:33,750 INFO [RS:1;jenkins-hbase4:45225] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:33,750 INFO [RS:1;jenkins-hbase4:45225] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:33,751 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:33,752 INFO [RS:1;jenkins-hbase4:45225] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45225 2023-07-19 21:15:33,754 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45225,1689801303640 2023-07-19 21:15:33,754 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:33,755 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45225,1689801303640] 2023-07-19 21:15:33,755 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45225,1689801303640; numProcessing=4 2023-07-19 21:15:33,757 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45225,1689801303640 already deleted, retry=false 2023-07-19 21:15:33,757 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45225,1689801303640 expired; onlineServers=0 2023-07-19 21:15:33,757 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36267,1689801301454' ***** 2023-07-19 21:15:33,757 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 21:15:33,758 DEBUG [M:0;jenkins-hbase4:36267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f132a9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:33,758 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:33,760 INFO [M:0;jenkins-hbase4:36267] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@47177c10{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:33,760 INFO [M:0;jenkins-hbase4:36267] server.AbstractConnector(383): Stopped ServerConnector@cbd2559{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:33,760 INFO [M:0;jenkins-hbase4:36267] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:33,761 INFO [M:0;jenkins-hbase4:36267] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1a079e3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:33,761 INFO [M:0;jenkins-hbase4:36267] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2db17b81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:33,761 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:33,761 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:33,762 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36267,1689801301454 2023-07-19 21:15:33,762 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36267,1689801301454; all regions closed. 2023-07-19 21:15:33,762 DEBUG [M:0;jenkins-hbase4:36267] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:33,762 INFO [M:0;jenkins-hbase4:36267] master.HMaster(1491): Stopping master jetty server 2023-07-19 21:15:33,762 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:33,763 INFO [M:0;jenkins-hbase4:36267] server.AbstractConnector(383): Stopped ServerConnector@626aec64{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:33,763 DEBUG [M:0;jenkins-hbase4:36267] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 21:15:33,763 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-19 21:15:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801305406] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801305406,5,FailOnTimeoutGroup] 2023-07-19 21:15:33,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801305406] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801305406,5,FailOnTimeoutGroup] 2023-07-19 21:15:33,763 DEBUG [M:0;jenkins-hbase4:36267] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 21:15:33,764 INFO [M:0;jenkins-hbase4:36267] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 21:15:33,764 INFO [M:0;jenkins-hbase4:36267] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-19 21:15:33,764 INFO [M:0;jenkins-hbase4:36267] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-19 21:15:33,764 DEBUG [M:0;jenkins-hbase4:36267] master.HMaster(1512): Stopping service threads 2023-07-19 21:15:33,764 INFO [M:0;jenkins-hbase4:36267] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 21:15:33,764 ERROR [M:0;jenkins-hbase4:36267] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-19 21:15:33,765 INFO [M:0;jenkins-hbase4:36267] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 21:15:33,765 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 21:15:33,765 DEBUG [M:0;jenkins-hbase4:36267] zookeeper.ZKUtil(398): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 21:15:33,765 WARN [M:0;jenkins-hbase4:36267] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 21:15:33,765 INFO [M:0;jenkins-hbase4:36267] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 21:15:33,765 INFO [M:0;jenkins-hbase4:36267] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 21:15:33,765 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:33,766 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:33,766 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 21:15:33,766 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:33,766 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:33,766 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.62 KB heapSize=632.82 KB 2023-07-19 21:15:33,781 INFO [M:0;jenkins-hbase4:36267] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.62 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2c617febb3674448aa7a2970575a89e7 2023-07-19 21:15:33,786 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2c617febb3674448aa7a2970575a89e7 as hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2c617febb3674448aa7a2970575a89e7 2023-07-19 21:15:33,792 INFO [M:0;jenkins-hbase4:36267] regionserver.HStore(1080): Added hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2c617febb3674448aa7a2970575a89e7, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-19 21:15:33,793 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegion(2948): Finished flush of dataSize ~528.62 KB/541311, heapSize ~632.80 KB/647992, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=1176, compaction requested=false 2023-07-19 21:15:33,795 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:33,795 DEBUG [M:0;jenkins-hbase4:36267] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:33,799 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:33,799 INFO [M:0;jenkins-hbase4:36267] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 21:15:33,800 INFO [M:0;jenkins-hbase4:36267] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36267 2023-07-19 21:15:33,801 DEBUG [M:0;jenkins-hbase4:36267] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36267,1689801301454 already deleted, retry=false 2023-07-19 21:15:33,855 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,855 INFO [RS:1;jenkins-hbase4:45225] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45225,1689801303640; zookeeper connection closed. 
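Just below, the utility reports "Shutdown of 1 master(s) and 4 regionserver(s) complete", declares the minicluster down, and immediately starts a new one with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1, ...}. A minimal sketch of that lifecycle as a test would drive it, assuming only the option values shown in the log (the test body itself is omitted):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Start a mini cluster with the option values the log prints, run the test, shut down.
public class MiniClusterLifecycle {
    public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();

        util.startMiniCluster(option);   // "STARTING DFS", region servers, master, ZK ...
        try {
            // ... run assertions against util.getAdmin(), util.getConnection(), etc.
        } finally {
            util.shutdownMiniCluster();  // produces the close/flush/"Minicluster is down" sequence above
        }
    }
}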
2023-07-19 21:15:33,856 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): regionserver:45225-0x1017f701e770002, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,856 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@75b3342] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@75b3342 2023-07-19 21:15:33,856 INFO [Listener at localhost/39507] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-19 21:15:33,956 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,956 INFO [M:0;jenkins-hbase4:36267] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36267,1689801301454; zookeeper connection closed. 2023-07-19 21:15:33,956 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): master:36267-0x1017f701e770000, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:33,958 WARN [Listener at localhost/39507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:33,964 INFO [Listener at localhost/39507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:34,069 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:34,069 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1020996147-172.31.14.131-1689801298089 (Datanode Uuid 675a1942-e574-4708-8086-d20f31149659) service to localhost/127.0.0.1:40615 2023-07-19 21:15:34,070 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data5/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,071 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data6/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,074 WARN [Listener at localhost/39507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:34,080 INFO [Listener at localhost/39507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:34,185 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:34,185 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1020996147-172.31.14.131-1689801298089 (Datanode Uuid 3bf1676a-6788-4615-a0cc-a155abb7b2b2) service to localhost/127.0.0.1:40615 2023-07-19 21:15:34,186 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data3/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,187 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data4/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,188 WARN [Listener at localhost/39507] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:34,196 INFO [Listener at localhost/39507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:34,300 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:34,300 WARN [BP-1020996147-172.31.14.131-1689801298089 heartbeating to localhost/127.0.0.1:40615] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1020996147-172.31.14.131-1689801298089 (Datanode Uuid 3ea9533e-7ee4-447a-994e-c604c2effdda) service to localhost/127.0.0.1:40615 2023-07-19 21:15:34,301 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data1/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,301 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/cluster_6b7b306f-8129-bddb-ac61-f0f594f0f520/dfs/data/data2/current/BP-1020996147-172.31.14.131-1689801298089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:34,339 INFO [Listener at localhost/39507] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:34,469 INFO [Listener at localhost/39507] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.log.dir so I do NOT create it in target/test-data/747a6116-57eb-2930-7c95-805ca570230b 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0c899890-90fb-25ce-1021-f97094c81452/hadoop.tmp.dir so I do NOT create it in target/test-data/747a6116-57eb-2930-7c95-805ca570230b 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47, deleteOnExit=true 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 21:15:34,540 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/test.cache.data in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 21:15:34,541 DEBUG [Listener at localhost/39507] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:15:34,541 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:15:34,542 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:15:34,543 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 21:15:34,543 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/nfs.dump.dir in system properties and HBase conf 2023-07-19 21:15:34,543 INFO [Listener at localhost/39507] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir in system properties and HBase conf 2023-07-19 21:15:34,543 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:15:34,543 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 21:15:34,544 INFO [Listener at localhost/39507] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 21:15:34,552 WARN [Listener at localhost/39507] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:15:34,552 WARN [Listener at localhost/39507] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:15:34,565 DEBUG [Listener at localhost/39507-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017f701e77000a, quorum=127.0.0.1:58627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-19 21:15:34,565 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017f701e77000a, quorum=127.0.0.1:58627, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 21:15:34,663 WARN [Listener at localhost/39507] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:34,667 INFO [Listener at localhost/39507] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:34,672 INFO [Listener at localhost/39507] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/Jetty_localhost_43311_hdfs____.b24rm/webapp 2023-07-19 21:15:34,777 INFO [Listener at localhost/39507] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43311 2023-07-19 21:15:34,781 WARN [Listener at localhost/39507] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:15:34,782 WARN [Listener at localhost/39507] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:15:34,839 WARN [Listener at localhost/45035] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:34,855 WARN [Listener at localhost/45035] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:34,857 WARN [Listener 
at localhost/45035] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:34,859 INFO [Listener at localhost/45035] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:34,865 INFO [Listener at localhost/45035] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/Jetty_localhost_43997_datanode____vyajvj/webapp 2023-07-19 21:15:34,959 INFO [Listener at localhost/45035] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43997 2023-07-19 21:15:34,967 WARN [Listener at localhost/35281] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:34,986 WARN [Listener at localhost/35281] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:34,989 WARN [Listener at localhost/35281] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:34,990 INFO [Listener at localhost/35281] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:34,992 INFO [Listener at localhost/35281] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/Jetty_localhost_39957_datanode____.7n76lg/webapp 2023-07-19 21:15:35,101 INFO [Listener at localhost/35281] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39957 2023-07-19 21:15:35,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e409bf43a48fb4c: Processing first storage report for DS-c4a2f71a-d32a-440d-b1e9-df697582d61c from datanode 022266d3-1e36-4865-89e3-f3f1c51d763c 2023-07-19 21:15:35,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e409bf43a48fb4c: from storage DS-c4a2f71a-d32a-440d-b1e9-df697582d61c node DatanodeRegistration(127.0.0.1:40675, datanodeUuid=022266d3-1e36-4865-89e3-f3f1c51d763c, infoPort=37111, infoSecurePort=0, ipcPort=35281, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e409bf43a48fb4c: Processing first storage report for DS-46ab55ff-b2e7-461d-b221-6dd07ef5ff2b from datanode 022266d3-1e36-4865-89e3-f3f1c51d763c 2023-07-19 21:15:35,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e409bf43a48fb4c: from storage DS-46ab55ff-b2e7-461d-b221-6dd07ef5ff2b node DatanodeRegistration(127.0.0.1:40675, datanodeUuid=022266d3-1e36-4865-89e3-f3f1c51d763c, infoPort=37111, infoSecurePort=0, ipcPort=35281, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,114 WARN [Listener at localhost/41503] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-19 21:15:35,138 WARN [Listener at localhost/41503] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:35,140 WARN [Listener at localhost/41503] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:35,141 INFO [Listener at localhost/41503] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:35,145 INFO [Listener at localhost/41503] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/Jetty_localhost_33395_datanode____bqtf9y/webapp 2023-07-19 21:15:35,250 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcda774e290de1b07: Processing first storage report for DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9 from datanode 9e22528d-2c35-4aaa-b67e-37ac6e2f08f1 2023-07-19 21:15:35,250 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcda774e290de1b07: from storage DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9 node DatanodeRegistration(127.0.0.1:43177, datanodeUuid=9e22528d-2c35-4aaa-b67e-37ac6e2f08f1, infoPort=44523, infoSecurePort=0, ipcPort=41503, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,250 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcda774e290de1b07: Processing first storage report for DS-7b239539-93f5-4d81-9069-7940f0bb7a52 from datanode 9e22528d-2c35-4aaa-b67e-37ac6e2f08f1 2023-07-19 21:15:35,250 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcda774e290de1b07: from storage DS-7b239539-93f5-4d81-9069-7940f0bb7a52 node DatanodeRegistration(127.0.0.1:43177, datanodeUuid=9e22528d-2c35-4aaa-b67e-37ac6e2f08f1, infoPort=44523, infoSecurePort=0, ipcPort=41503, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,271 INFO [Listener at localhost/41503] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33395 2023-07-19 21:15:35,281 WARN [Listener at localhost/37503] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:35,398 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe3abe2cc0e8c77ee: Processing first storage report for DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347 from datanode 4a32452a-5ca5-4d73-a0dc-ec921fb74b00 2023-07-19 21:15:35,398 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe3abe2cc0e8c77ee: from storage DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347 node DatanodeRegistration(127.0.0.1:42443, datanodeUuid=4a32452a-5ca5-4d73-a0dc-ec921fb74b00, infoPort=45007, infoSecurePort=0, ipcPort=37503, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,398 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe3abe2cc0e8c77ee: Processing first storage 
report for DS-e08ca09a-0056-4a3a-b5b9-b88c38ad7ef3 from datanode 4a32452a-5ca5-4d73-a0dc-ec921fb74b00 2023-07-19 21:15:35,398 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe3abe2cc0e8c77ee: from storage DS-e08ca09a-0056-4a3a-b5b9-b88c38ad7ef3 node DatanodeRegistration(127.0.0.1:42443, datanodeUuid=4a32452a-5ca5-4d73-a0dc-ec921fb74b00, infoPort=45007, infoSecurePort=0, ipcPort=37503, storageInfo=lv=-57;cid=testClusterID;nsid=785629647;c=1689801334555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:35,496 DEBUG [Listener at localhost/37503] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b 2023-07-19 21:15:35,499 INFO [Listener at localhost/37503] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/zookeeper_0, clientPort=51495, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 21:15:35,500 INFO [Listener at localhost/37503] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51495 2023-07-19 21:15:35,501 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,501 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,519 INFO [Listener at localhost/37503] util.FSUtils(471): Created version file at hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455 with version=8 2023-07-19 21:15:35,519 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/hbase-staging 2023-07-19 21:15:35,521 DEBUG [Listener at localhost/37503] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 21:15:35,521 DEBUG [Listener at localhost/37503] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 21:15:35,521 DEBUG [Listener at localhost/37503] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 21:15:35,521 DEBUG [Listener at localhost/37503] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
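The entries above show HBaseTestingUtility bringing up a mini DFS, starting a MiniZooKeeperCluster on an ephemeral client port (51495), writing the hbase.rootdir version file, and randomizing the LocalHBaseCluster ports before the master and region servers come up. A minimal sketch, assuming a JUnit 4 test class with a shared HBaseTestingUtility field (the class name and field are placeholders, not taken from the log), of the setup code that typically drives this sequence; the counts simply match the three region servers and three datanodes visible in this log:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterSketch {
  // Shared test utility; the "Listener at localhost/..." threads in the log
  // belong to the DFS/ZooKeeper/HBase processes this helper spins up.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // One master, three region servers, three datanodes.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Tears the same mini DFS/ZooKeeper/HBase processes back down.
    TEST_UTIL.shutdownMiniCluster();
  }
}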
2023-07-19 21:15:35,522 INFO [Listener at localhost/37503] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:35,522 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,522 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,523 INFO [Listener at localhost/37503] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:35,523 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,523 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:35,523 INFO [Listener at localhost/37503] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:35,524 INFO [Listener at localhost/37503] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45995 2023-07-19 21:15:35,525 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,526 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,527 INFO [Listener at localhost/37503] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45995 connecting to ZooKeeper ensemble=127.0.0.1:51495 2023-07-19 21:15:35,535 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:459950x0, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:35,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45995-0x1017f70a5cd0000 connected 2023-07-19 21:15:35,549 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:35,550 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:35,550 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:35,555 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45995 2023-07-19 21:15:35,555 DEBUG [Listener at localhost/37503] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45995 2023-07-19 21:15:35,555 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45995 2023-07-19 21:15:35,556 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45995 2023-07-19 21:15:35,557 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45995 2023-07-19 21:15:35,559 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:35,559 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:35,560 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:35,560 INFO [Listener at localhost/37503] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 21:15:35,560 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:35,560 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:35,560 INFO [Listener at localhost/37503] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
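The master's RPC executors above start with handlerCount=3 and maxQueueLength=30, and the embedded HttpServer gets the standard safety, clickjacking-prevention and security-headers filters. A hedged sketch, assuming the standard configuration keys, of how a small handler pool like this would be dialed in on the Configuration handed to the cluster; which keys the test harness actually overrides is not shown in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcSizingSketch {
  static Configuration smallClusterConf() {
    Configuration conf = HBaseConfiguration.create();
    // handlerCount=3 in the RpcExecutor lines corresponds to a small handler
    // pool; the usual key for it is hbase.regionserver.handler.count.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // maxQueueLength=30 above; hbase.ipc.server.max.callqueue.length caps the
    // call queue length directly (assumption: this is the key in play here).
    conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
    return conf;
  }
}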
2023-07-19 21:15:35,561 INFO [Listener at localhost/37503] http.HttpServer(1146): Jetty bound to port 39249 2023-07-19 21:15:35,561 INFO [Listener at localhost/37503] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:35,564 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,565 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b4d4f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:35,565 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,566 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d3661ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:35,681 INFO [Listener at localhost/37503] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:35,682 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:35,682 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:35,682 INFO [Listener at localhost/37503] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:35,683 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,685 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@362229ca{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/jetty-0_0_0_0-39249-hbase-server-2_4_18-SNAPSHOT_jar-_-any-74016762540647918/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:35,686 INFO [Listener at localhost/37503] server.AbstractConnector(333): Started ServerConnector@1670d5cc{HTTP/1.1, (http/1.1)}{0.0.0.0:39249} 2023-07-19 21:15:35,686 INFO [Listener at localhost/37503] server.Server(415): Started @39849ms 2023-07-19 21:15:35,686 INFO [Listener at localhost/37503] master.HMaster(444): hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455, hbase.cluster.distributed=false 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,701 INFO 
[Listener at localhost/37503] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:35,701 INFO [Listener at localhost/37503] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:35,702 INFO [Listener at localhost/37503] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44931 2023-07-19 21:15:35,703 INFO [Listener at localhost/37503] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:35,704 DEBUG [Listener at localhost/37503] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:35,704 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,706 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,707 INFO [Listener at localhost/37503] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44931 connecting to ZooKeeper ensemble=127.0.0.1:51495 2023-07-19 21:15:35,711 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:449310x0, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:35,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44931-0x1017f70a5cd0001 connected 2023-07-19 21:15:35,712 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:35,713 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:35,713 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:35,714 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44931 2023-07-19 21:15:35,714 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44931 2023-07-19 21:15:35,714 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44931 2023-07-19 21:15:35,717 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44931 2023-07-19 21:15:35,718 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44931 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:35,720 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:35,721 INFO [Listener at localhost/37503] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:35,722 INFO [Listener at localhost/37503] http.HttpServer(1146): Jetty bound to port 44191 2023-07-19 21:15:35,722 INFO [Listener at localhost/37503] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:35,727 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,727 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50d22f03{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:35,727 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,727 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3388f574{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:35,845 INFO [Listener at localhost/37503] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:35,846 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:35,846 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:35,847 INFO [Listener at localhost/37503] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:35,848 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,849 INFO 
[Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67054ef5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/jetty-0_0_0_0-44191-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3541864178813239351/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:35,850 INFO [Listener at localhost/37503] server.AbstractConnector(333): Started ServerConnector@151e3f23{HTTP/1.1, (http/1.1)}{0.0.0.0:44191} 2023-07-19 21:15:35,851 INFO [Listener at localhost/37503] server.Server(415): Started @40013ms 2023-07-19 21:15:35,868 INFO [Listener at localhost/37503] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:35,869 INFO [Listener at localhost/37503] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:35,870 INFO [Listener at localhost/37503] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41399 2023-07-19 21:15:35,871 INFO [Listener at localhost/37503] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:35,872 DEBUG [Listener at localhost/37503] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:35,872 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,873 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:35,874 INFO [Listener at localhost/37503] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41399 connecting to ZooKeeper ensemble=127.0.0.1:51495 2023-07-19 21:15:35,878 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:413990x0, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
21:15:35,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41399-0x1017f70a5cd0002 connected 2023-07-19 21:15:35,880 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:35,880 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:35,880 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:35,881 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41399 2023-07-19 21:15:35,881 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41399 2023-07-19 21:15:35,882 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41399 2023-07-19 21:15:35,884 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41399 2023-07-19 21:15:35,884 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41399 2023-07-19 21:15:35,886 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:35,886 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:35,886 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:35,886 INFO [Listener at localhost/37503] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:35,887 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:35,887 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:35,887 INFO [Listener at localhost/37503] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
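Each of the three region servers above opens a ZooKeeper session against 127.0.0.1:51495 and sets watchers on /hbase/master, /hbase/running and /hbase/acl before registering. A small hypothetical helper, reusing the TEST_UTIL-style field from the earlier sketch, that would confirm those servers are visible through the Admin API once registration completes:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class LiveServersSketch {
  // Hypothetical check: once the region servers' znodes exist, the Admin API
  // reports them as live members of the mini cluster.
  static void printLiveRegionServers(HBaseTestingUtility util) throws Exception {
    Admin admin = util.getAdmin();
    for (ServerName sn : admin.getRegionServers()) {
      System.out.println("live region server: " + sn);
    }
  }
}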
2023-07-19 21:15:35,887 INFO [Listener at localhost/37503] http.HttpServer(1146): Jetty bound to port 33981 2023-07-19 21:15:35,888 INFO [Listener at localhost/37503] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:35,894 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,894 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@cd78568{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:35,894 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:35,894 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c19355a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:36,012 INFO [Listener at localhost/37503] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:36,013 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:36,014 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:36,014 INFO [Listener at localhost/37503] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:36,015 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:36,016 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@440c4fd4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/jetty-0_0_0_0-33981-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5142431177481400916/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:36,017 INFO [Listener at localhost/37503] server.AbstractConnector(333): Started ServerConnector@2b381da4{HTTP/1.1, (http/1.1)}{0.0.0.0:33981} 2023-07-19 21:15:36,018 INFO [Listener at localhost/37503] server.Server(415): Started @40180ms 2023-07-19 21:15:36,029 INFO [Listener at localhost/37503] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:36,029 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:36,030 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:36,030 INFO [Listener at localhost/37503] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:36,030 INFO 
[Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:36,030 INFO [Listener at localhost/37503] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:36,030 INFO [Listener at localhost/37503] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:36,031 INFO [Listener at localhost/37503] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45745 2023-07-19 21:15:36,031 INFO [Listener at localhost/37503] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:36,032 DEBUG [Listener at localhost/37503] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:36,033 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:36,034 INFO [Listener at localhost/37503] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:36,035 INFO [Listener at localhost/37503] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45745 connecting to ZooKeeper ensemble=127.0.0.1:51495 2023-07-19 21:15:36,040 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:457450x0, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:36,041 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45745-0x1017f70a5cd0003 connected 2023-07-19 21:15:36,041 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:36,041 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:36,042 DEBUG [Listener at localhost/37503] zookeeper.ZKUtil(164): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:36,042 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45745 2023-07-19 21:15:36,043 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45745 2023-07-19 21:15:36,043 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45745 2023-07-19 21:15:36,044 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45745 2023-07-19 21:15:36,046 DEBUG [Listener at localhost/37503] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45745 2023-07-19 21:15:36,048 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:36,048 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:36,049 INFO [Listener at localhost/37503] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:36,049 INFO [Listener at localhost/37503] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:36,049 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:36,049 INFO [Listener at localhost/37503] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:36,049 INFO [Listener at localhost/37503] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:36,050 INFO [Listener at localhost/37503] http.HttpServer(1146): Jetty bound to port 38757 2023-07-19 21:15:36,050 INFO [Listener at localhost/37503] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:36,055 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:36,055 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50d1f456{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:36,056 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:36,056 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37ae8bbb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:36,174 INFO [Listener at localhost/37503] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:36,175 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:36,175 INFO [Listener at localhost/37503] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:36,175 INFO [Listener at localhost/37503] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:36,176 INFO [Listener at localhost/37503] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:36,177 INFO [Listener at localhost/37503] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@76fdc157{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/java.io.tmpdir/jetty-0_0_0_0-38757-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7092795602140040180/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:36,179 INFO [Listener at localhost/37503] server.AbstractConnector(333): Started ServerConnector@3d692739{HTTP/1.1, (http/1.1)}{0.0.0.0:38757} 2023-07-19 21:15:36,179 INFO [Listener at localhost/37503] server.Server(415): Started @40342ms 2023-07-19 21:15:36,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:36,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7143040{HTTP/1.1, (http/1.1)}{0.0.0.0:46855} 2023-07-19 21:15:36,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @40349ms 2023-07-19 21:15:36,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,187 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:36,188 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,190 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:36,190 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,190 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:36,190 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:36,190 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:36,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:36,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45995,1689801335521 from backup master directory 2023-07-19 21:15:36,194 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:36,195 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,195 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:36,195 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:36,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/hbase.id with ID: e738c531-40af-4a45-87ec-21eb8282ea3a 2023-07-19 21:15:36,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:36,232 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x160047a1 to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:36,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60a4e18, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:36,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:36,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 21:15:36,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:36,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store-tmp 2023-07-19 21:15:36,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:36,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:36,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:36,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:36,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:36,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:36,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
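The master bootstraps its local 'master:store' region with a single 'proc' column family whose attributes are dumped inline above (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and so on). For readability, the same attributes re-expressed with the HBase 2.x descriptor builders; this is only a restatement of that dump, since the real descriptor is built inside the master's local-region bootstrap code, not by the test:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  // Builds a descriptor equivalent to the 'proc' family attributes printed
  // in the log; purely illustrative of the attribute dump's meaning.
  static TableDescriptor masterStoreLike() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setBlocksize(65536)               // BLOCKSIZE => '65536'
        .setInMemory(false)                // IN_MEMORY => 'false'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();
  }
}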
2023-07-19 21:15:36,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:36,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/WALs/jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45995%2C1689801335521, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/WALs/jenkins-hbase4.apache.org,45995,1689801335521, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/oldWALs, maxLogs=10 2023-07-19 21:15:36,303 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK] 2023-07-19 21:15:36,304 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK] 2023-07-19 21:15:36,304 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK] 2023-07-19 21:15:36,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/WALs/jenkins-hbase4.apache.org,45995,1689801335521/jenkins-hbase4.apache.org%2C45995%2C1689801335521.1689801336278 2023-07-19 21:15:36,314 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK], DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK], DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK]] 2023-07-19 21:15:36,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:36,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:36,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,317 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,319 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 21:15:36,319 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 21:15:36,320 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:36,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:36,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9770887360, jitterRate=-0.09001520276069641}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:36,328 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:36,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 21:15:36,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 21:15:36,329 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 21:15:36,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 21:15:36,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 21:15:36,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 21:15:36,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 21:15:36,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 21:15:36,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-19 21:15:36,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 21:15:36,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 21:15:36,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 21:15:36,337 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 21:15:36,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 21:15:36,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 21:15:36,340 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:36,340 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:36,340 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-19 21:15:36,340 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:36,340 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45995,1689801335521, sessionid=0x1017f70a5cd0000, setting cluster-up flag (Was=false) 2023-07-19 21:15:36,349 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 21:15:36,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,357 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 21:15:36,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:36,363 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.hbase-snapshot/.tmp 2023-07-19 21:15:36,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 21:15:36,365 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 21:15:36,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 21:15:36,366 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-19 21:15:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-19 21:15:36,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:36,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:36,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 21:15:36,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:36,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-19 21:15:36,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:36,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:36,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:36,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:36,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 21:15:36,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:36,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,381 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(951): ClusterId : e738c531-40af-4a45-87ec-21eb8282ea3a 2023-07-19 21:15:36,381 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(951): ClusterId : e738c531-40af-4a45-87ec-21eb8282ea3a 2023-07-19 21:15:36,383 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:36,384 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(951): ClusterId : e738c531-40af-4a45-87ec-21eb8282ea3a 2023-07-19 21:15:36,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689801366382 2023-07-19 21:15:36,384 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 21:15:36,387 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 21:15:36,387 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 21:15:36,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,387 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:36,387 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 21:15:36,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 21:15:36,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 21:15:36,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 21:15:36,389 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:36,389 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:36,389 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:36,390 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:36,390 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:36,390 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:36,390 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 21:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 21:15:36,392 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:36,393 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:36,393 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:36,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801336390,5,FailOnTimeoutGroup] 2023-07-19 21:15:36,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801336395,5,FailOnTimeoutGroup] 2023-07-19 21:15:36,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,397 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ReadOnlyZKClient(139): Connect 0x6decd915 to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:36,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 21:15:36,397 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ReadOnlyZKClient(139): Connect 0x11dcc035 to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:36,397 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ReadOnlyZKClient(139): Connect 0x296ca9c1 to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:36,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,415 DEBUG [RS:2;jenkins-hbase4:45745] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f1b2c7e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:36,415 DEBUG [RS:0;jenkins-hbase4:44931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@632b3ad8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:36,415 DEBUG [RS:1;jenkins-hbase4:41399] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2394c2b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:36,416 DEBUG [RS:2;jenkins-hbase4:45745] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e262645, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:36,416 DEBUG [RS:0;jenkins-hbase4:44931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@221630e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:36,416 DEBUG [RS:1;jenkins-hbase4:41399] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@669648a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:36,424 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:36,425 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:36,425 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455 2023-07-19 21:15:36,427 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44931 2023-07-19 21:15:36,427 INFO [RS:0;jenkins-hbase4:44931] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:36,427 INFO [RS:0;jenkins-hbase4:44931] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:36,428 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:36,428 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45995,1689801335521 with isa=jenkins-hbase4.apache.org/172.31.14.131:44931, startcode=1689801335700 2023-07-19 21:15:36,428 DEBUG [RS:0;jenkins-hbase4:44931] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:36,430 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41399 2023-07-19 21:15:36,430 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45745 2023-07-19 21:15:36,430 INFO [RS:1;jenkins-hbase4:41399] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:36,430 INFO [RS:1;jenkins-hbase4:41399] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:36,430 INFO [RS:2;jenkins-hbase4:45745] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:36,430 INFO [RS:2;jenkins-hbase4:45745] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:36,430 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:36,430 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-19 21:15:36,431 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45995,1689801335521 with isa=jenkins-hbase4.apache.org/172.31.14.131:41399, startcode=1689801335868 2023-07-19 21:15:36,431 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45995,1689801335521 with isa=jenkins-hbase4.apache.org/172.31.14.131:45745, startcode=1689801336029 2023-07-19 21:15:36,431 DEBUG [RS:1;jenkins-hbase4:41399] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:36,431 DEBUG [RS:2;jenkins-hbase4:45745] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:36,431 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33675, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:36,434 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45995] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,434 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:36,435 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 21:15:36,435 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58453, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:36,435 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48453, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:36,435 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45995] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,435 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:36,435 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 21:15:36,435 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45995] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,436 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 21:15:36,436 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 21:15:36,436 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455 2023-07-19 21:15:36,436 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-19 21:15:36,436 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39249 2023-07-19 21:15:36,436 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455 2023-07-19 21:15:36,436 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-19 21:15:36,436 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455 2023-07-19 21:15:36,436 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39249 2023-07-19 21:15:36,436 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45035 2023-07-19 21:15:36,436 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39249 2023-07-19 21:15:36,438 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:36,445 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ZKUtil(162): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,445 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41399,1689801335868] 2023-07-19 21:15:36,445 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44931,1689801335700] 2023-07-19 21:15:36,445 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45745,1689801336029] 2023-07-19 21:15:36,445 WARN [RS:2;jenkins-hbase4:45745] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 21:15:36,445 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ZKUtil(162): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,445 INFO [RS:2;jenkins-hbase4:45745] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:36,445 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ZKUtil(162): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,448 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:36,446 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,446 WARN [RS:0;jenkins-hbase4:44931] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:36,448 WARN [RS:1;jenkins-hbase4:41399] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:36,448 INFO [RS:0;jenkins-hbase4:44931] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:36,448 INFO [RS:1;jenkins-hbase4:41399] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:36,448 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,448 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,455 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:36,459 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ZKUtil(162): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,459 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ZKUtil(162): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,459 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/info 2023-07-19 21:15:36,460 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ZKUtil(162): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,460 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ZKUtil(162): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,460 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ZKUtil(162): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,460 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:36,461 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ZKUtil(162): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,461 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ZKUtil(162): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,461 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ZKUtil(162): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,461 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,461 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ZKUtil(162): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,461 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:36,462 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:36,462 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:36,462 INFO [RS:2;jenkins-hbase4:45745] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:36,462 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:36,462 INFO [RS:0;jenkins-hbase4:44931] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver 
metrics every 5000 milliseconds 2023-07-19 21:15:36,463 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:36,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:36,465 INFO [RS:1;jenkins-hbase4:41399] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:36,465 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,465 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:36,466 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/table 2023-07-19 21:15:36,466 INFO [RS:2;jenkins-hbase4:45745] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:36,467 INFO [RS:1;jenkins-hbase4:41399] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:36,467 INFO [RS:0;jenkins-hbase4:44931] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:36,467 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:36,467 INFO [RS:2;jenkins-hbase4:45745] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:36,467 INFO 
[RS:1;jenkins-hbase4:41399] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:36,467 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,467 INFO [RS:0;jenkins-hbase4:44931] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:36,467 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,467 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,467 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:36,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,468 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:36,468 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:36,471 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,471 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740 2023-07-19 21:15:36,471 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,471 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,471 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,471 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,471 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,471 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,471 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:1;jenkins-hbase4:41399] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,471 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,473 DEBUG [RS:2;jenkins-hbase4:45745] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,472 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,473 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,473 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,473 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,473 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,473 DEBUG [RS:0;jenkins-hbase4:44931] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:36,475 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 21:15:36,476 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:36,478 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,479 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,482 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:36,483 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10583906880, jitterRate=-0.014296859502792358}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:36,483 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:36,483 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:36,483 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:36,483 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:36,483 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:36,483 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:36,487 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:36,487 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:36,487 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:36,487 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 21:15:36,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 21:15:36,490 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 21:15:36,492 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 21:15:36,493 INFO [RS:1;jenkins-hbase4:41399] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:36,493 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41399,1689801335868-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,496 INFO [RS:0;jenkins-hbase4:44931] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:36,496 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44931,1689801335700-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,497 INFO [RS:2;jenkins-hbase4:45745] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:36,498 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45745,1689801336029-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,505 INFO [RS:1;jenkins-hbase4:41399] regionserver.Replication(203): jenkins-hbase4.apache.org,41399,1689801335868 started 2023-07-19 21:15:36,505 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41399,1689801335868, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41399, sessionid=0x1017f70a5cd0002 2023-07-19 21:15:36,507 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:36,507 DEBUG [RS:1;jenkins-hbase4:41399] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,507 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41399,1689801335868' 2023-07-19 21:15:36,507 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41399,1689801335868' 2023-07-19 21:15:36,508 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:36,508 INFO [RS:0;jenkins-hbase4:44931] regionserver.Replication(203): jenkins-hbase4.apache.org,44931,1689801335700 started 2023-07-19 21:15:36,508 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44931,1689801335700, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44931, sessionid=0x1017f70a5cd0001 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,509 DEBUG [RS:1;jenkins-hbase4:41399] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44931,1689801335700' 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/flush-table-proc/abort' 2023-07-19 21:15:36,509 DEBUG [RS:1;jenkins-hbase4:41399] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:36,509 INFO [RS:1;jenkins-hbase4:41399] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44931,1689801335700' 2023-07-19 21:15:36,509 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:36,510 DEBUG [RS:0;jenkins-hbase4:44931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:36,510 DEBUG [RS:0;jenkins-hbase4:44931] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:36,510 INFO [RS:0;jenkins-hbase4:44931] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 21:15:36,511 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,511 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,512 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ZKUtil(398): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 21:15:36,512 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ZKUtil(398): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 21:15:36,512 INFO [RS:1;jenkins-hbase4:41399] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 21:15:36,512 INFO [RS:0;jenkins-hbase4:44931] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 21:15:36,512 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,512 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,513 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:36,513 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,513 INFO [RS:2;jenkins-hbase4:45745] regionserver.Replication(203): jenkins-hbase4.apache.org,45745,1689801336029 started 2023-07-19 21:15:36,513 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45745,1689801336029, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45745, sessionid=0x1017f70a5cd0003 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45745,1689801336029' 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45745,1689801336029' 2023-07-19 21:15:36,514 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:36,515 DEBUG [RS:2;jenkins-hbase4:45745] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:36,515 DEBUG [RS:2;jenkins-hbase4:45745] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:36,515 INFO [RS:2;jenkins-hbase4:45745] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-19 21:15:36,515 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,515 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ZKUtil(398): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-19 21:15:36,515 INFO [RS:2;jenkins-hbase4:45745] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-19 21:15:36,515 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
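Note: each region server above registers as a procedure member by watching the flush-table-proc and online-snapshot znodes, and treats the missing /hbase/rpc-throttle znode as "not an error" before defaulting the throttle to enabled. A minimal sketch with the plain ZooKeeper client (not HBase's ZKUtil), using the quorum address from this log:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ProcedureZNodeSketch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:51495", 30000, event -> { });
            // What each member does after start-up: look for newly acquired procedures.
            List<String> pending = zk.getChildren("/hbase/flush-table-proc/acquired", false);
            System.out.println("pending flush-table procedures: " + pending);
            // A missing /hbase/rpc-throttle znode is expected on a fresh cluster; the
            // servers then fall back to "rpc throttle enabled", as the log states.
            boolean znodePresent = zk.exists("/hbase/rpc-throttle", false) != null;
            System.out.println("rpc-throttle znode present: " + znodePresent);
            zk.close();
        }
    }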
2023-07-19 21:15:36,515 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,616 INFO [RS:1;jenkins-hbase4:41399] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41399%2C1689801335868, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,41399,1689801335868, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs, maxLogs=32 2023-07-19 21:15:36,616 INFO [RS:0;jenkins-hbase4:44931] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44931%2C1689801335700, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,44931,1689801335700, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs, maxLogs=32 2023-07-19 21:15:36,617 INFO [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45745%2C1689801336029, suffix=, logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,45745,1689801336029, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs, maxLogs=32 2023-07-19 21:15:36,633 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK] 2023-07-19 21:15:36,633 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK] 2023-07-19 21:15:36,633 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK] 2023-07-19 21:15:36,640 INFO [RS:0;jenkins-hbase4:44931] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,44931,1689801335700/jenkins-hbase4.apache.org%2C44931%2C1689801335700.1689801336617 2023-07-19 21:15:36,640 DEBUG [RS:0;jenkins-hbase4:44931] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK], DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK], DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK]] 2023-07-19 21:15:36,643 DEBUG [jenkins-hbase4:45995] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 21:15:36,643 DEBUG [jenkins-hbase4:45995] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:36,644 DEBUG [jenkins-hbase4:45995] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:36,644 DEBUG 
[jenkins-hbase4:45995] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:36,644 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK] 2023-07-19 21:15:36,644 DEBUG [jenkins-hbase4:45995] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:36,644 DEBUG [jenkins-hbase4:45995] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:36,647 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK] 2023-07-19 21:15:36,647 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK] 2023-07-19 21:15:36,650 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK] 2023-07-19 21:15:36,650 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45745,1689801336029, state=OPENING 2023-07-19 21:15:36,650 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK] 2023-07-19 21:15:36,651 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK] 2023-07-19 21:15:36,651 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 21:15:36,653 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:36,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45745,1689801336029}] 2023-07-19 21:15:36,653 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:36,657 INFO [RS:1;jenkins-hbase4:41399] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,41399,1689801335868/jenkins-hbase4.apache.org%2C41399%2C1689801335868.1689801336617 2023-07-19 21:15:36,657 INFO [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,45745,1689801336029/jenkins-hbase4.apache.org%2C45745%2C1689801336029.1689801336617 2023-07-19 21:15:36,657 DEBUG [RS:1;jenkins-hbase4:41399] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK], DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK], DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK]] 2023-07-19 21:15:36,657 DEBUG [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK], DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK], DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK]] 2023-07-19 21:15:36,673 WARN [ReadOnlyZKClient-127.0.0.1:51495@0x160047a1] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 21:15:36,674 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:36,675 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:36,676 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45745] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:43986 deadline: 1689801396675, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,809 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:36,810 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:36,812 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44002, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:36,816 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 21:15:36,816 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:36,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45745%2C1689801336029.meta, suffix=.meta, logDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,45745,1689801336029, archiveDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs, maxLogs=32 2023-07-19 21:15:36,833 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK] 2023-07-19 
21:15:36,834 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK] 2023-07-19 21:15:36,833 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK] 2023-07-19 21:15:36,836 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/WALs/jenkins-hbase4.apache.org,45745,1689801336029/jenkins-hbase4.apache.org%2C45745%2C1689801336029.meta.1689801336818.meta 2023-07-19 21:15:36,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-a444fce9-a0e4-46b7-b96e-e1558fb2ebf9,DISK], DatanodeInfoWithStorage[127.0.0.1:42443,DS-b3f00c7b-abff-4712-a7a8-06b8ac21f347,DISK], DatanodeInfoWithStorage[127.0.0.1:40675,DS-c4a2f71a-d32a-440d-b1e9-df697582d61c,DISK]] 2023-07-19 21:15:36,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 21:15:36,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
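Note: the "WAL configuration: blocksize=256 MB, rollsize=128 MB" lines above show the roll size as half the WAL block size. The 0.5 logroll multiplier and the 2x-HDFS-blocksize default are assumptions about the defaults in effect here; the 256 MB / 128 MB figures come from the log itself:

    public class WalRollSizeSketch {
        public static void main(String[] args) {
            long hdfsBlockSize = 128L * 1024 * 1024;          // assumed HDFS default
            long walBlockSize = 2 * hdfsBlockSize;            // 256 MB, as logged
            double rollMultiplier = 0.5;                      // assumed logroll multiplier
            long rollSize = (long) (walBlockSize * rollMultiplier);
            System.out.println(rollSize / (1024 * 1024) + " MB"); // 128 MB, as logged
        }
    }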
2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 21:15:36,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 21:15:36,841 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:36,842 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/info 2023-07-19 21:15:36,842 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/info 2023-07-19 21:15:36,842 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:36,843 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,843 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:36,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:36,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:36,844 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:36,845 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,845 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:36,845 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/table 2023-07-19 21:15:36,845 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/table 2023-07-19 21:15:36,846 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:36,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:36,847 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740 2023-07-19 21:15:36,848 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740 2023-07-19 21:15:36,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
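Note: the CompactionConfiguration lines above (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2) parameterize ratio-based file selection. A simplified sketch of the ratio test only, not the ExploringCompactionPolicy itself:

    import java.util.List;

    public class RatioCheckSketch {
        // A file qualifies for a minor compaction if it is no larger than
        // `ratio` times the combined size of the other candidate files.
        static boolean fitsRatio(long fileSize, List<Long> others, double ratio) {
            long sumOthers = others.stream().mapToLong(Long::longValue).sum();
            return fileSize <= ratio * sumOthers;
        }
        public static void main(String[] args) {
            double ratio = 1.2; // "ratio 1.200000" in the log
            System.out.println(fitsRatio(100, List.of(60L, 50L), ratio)); // true:  100 <= 1.2 * 110
            System.out.println(fitsRatio(200, List.of(60L, 50L), ratio)); // false: 200 >  1.2 * 110
        }
    }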
2023-07-19 21:15:36,852 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:36,853 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11900870880, jitterRate=0.10835497081279755}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:36,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:36,854 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689801336809 2023-07-19 21:15:36,860 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 21:15:36,860 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 21:15:36,861 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45745,1689801336029, state=OPEN 2023-07-19 21:15:36,863 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:36,863 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:36,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 21:15:36,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45745,1689801336029 in 210 msec 2023-07-19 21:15:36,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 21:15:36,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 377 msec 2023-07-19 21:15:36,868 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 501 msec 2023-07-19 21:15:36,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689801336868, completionTime=-1 2023-07-19 21:15:36,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 21:15:36,868 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 21:15:36,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 21:15:36,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689801396873 2023-07-19 21:15:36,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689801456873 2023-07-19 21:15:36,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-19 21:15:36,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45995,1689801335521-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45995,1689801335521-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45995,1689801335521-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45995, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:36,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-19 21:15:36,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:36,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 21:15:36,881 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 21:15:36,881 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:36,882 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:36,883 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/namespace/794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:36,884 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/namespace/794719845e52e5a9725091870bb8beb7 empty. 2023-07-19 21:15:36,884 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/namespace/794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:36,884 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 21:15:36,898 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:36,900 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 794719845e52e5a9725091870bb8beb7, NAME => 'hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp 2023-07-19 21:15:36,908 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:36,909 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 794719845e52e5a9725091870bb8beb7, disabling compactions & flushes 2023-07-19 21:15:36,909 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 
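Note: the create 'hbase:namespace' statement above is issued internally by TableNamespaceManager, but the same descriptor can be expressed with the public 2.x client builders. A sketch of that descriptor only (it does not reproduce the internal code path), assuming the HBase client library is on the classpath:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
        public static void main(String[] args) {
            // Mirrors NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true',
            // VERSIONS => '10', BLOCKSIZE => '8192' from the logged create statement.
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("hbase", "namespace"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.ROW)
                    .setInMemory(true)
                    .setMaxVersions(10)
                    .setBlocksize(8192)
                    .build())
                .build();
            System.out.println(td);
        }
    }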
2023-07-19 21:15:36,909 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:36,909 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. after waiting 0 ms 2023-07-19 21:15:36,909 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:36,909 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:36,909 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 794719845e52e5a9725091870bb8beb7: 2023-07-19 21:15:36,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:36,912 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801336912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801336912"}]},"ts":"1689801336912"} 2023-07-19 21:15:36,914 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:36,915 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:36,915 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801336915"}]},"ts":"1689801336915"} 2023-07-19 21:15:36,916 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 21:15:36,920 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:36,920 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:36,920 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:36,920 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:36,920 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:36,920 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=794719845e52e5a9725091870bb8beb7, ASSIGN}] 2023-07-19 21:15:36,922 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=794719845e52e5a9725091870bb8beb7, ASSIGN 2023-07-19 21:15:36,923 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=794719845e52e5a9725091870bb8beb7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41399,1689801335868; forceNewPlan=false, retain=false 2023-07-19 21:15:36,978 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:36,980 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 21:15:36,982 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:36,983 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:36,985 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:36,985 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24 empty. 
2023-07-19 21:15:36,986 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:36,986 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 21:15:37,007 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:37,014 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7f397cfb91d0a3c0f0fe3072356f3d24, NAME => 'hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp 2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 7f397cfb91d0a3c0f0fe3072356f3d24, disabling compactions & flushes 2023-07-19 21:15:37,024 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. after waiting 0 ms 2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:37,024 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 
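Note: the hbase:rsgroup table created above carries a MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy, which is why its region later opens with that split policy instead of the stepping policy. A sketch of the equivalent descriptor with the public builders; the class names are taken from the logged create statement:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupDescriptorSketch {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("hbase", "rsgroup"))
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .setRegionSplitPolicyClassName(
                    "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .setColumnFamily(
                    ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                        .setMaxVersions(1)   // VERSIONS => '1' in the create statement
                        .build())
                .build();
            System.out.println(td);
        }
    }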
2023-07-19 21:15:37,024 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 7f397cfb91d0a3c0f0fe3072356f3d24: 2023-07-19 21:15:37,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:37,027 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801337027"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801337027"}]},"ts":"1689801337027"} 2023-07-19 21:15:37,028 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:37,029 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:37,029 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337029"}]},"ts":"1689801337029"} 2023-07-19 21:15:37,030 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 21:15:37,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:37,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:37,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:37,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:37,033 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:37,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7f397cfb91d0a3c0f0fe3072356f3d24, ASSIGN}] 2023-07-19 21:15:37,034 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7f397cfb91d0a3c0f0fe3072356f3d24, ASSIGN 2023-07-19 21:15:37,035 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7f397cfb91d0a3c0f0fe3072356f3d24, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44931,1689801335700; forceNewPlan=false, retain=false 2023-07-19 21:15:37,035 INFO [jenkins-hbase4:45995] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
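Note: the balancer lines above ("server 0 is on host 0" ... "Reassigned 2 regions") place the two new regions across the three registered servers. A deliberately simplified round-robin sketch; the real BaseLoadBalancer also weighs hosts, racks and prior locations (the "retained" count in the log), which this does not attempt:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RoundRobinAssignSketch {
        public static void main(String[] args) {
            List<String> servers = List.of(
                "jenkins-hbase4.apache.org,44931,1689801335700",
                "jenkins-hbase4.apache.org,41399,1689801335868",
                "jenkins-hbase4.apache.org,45745,1689801336029");
            List<String> regions = List.of("hbase:namespace", "hbase:rsgroup");
            Map<String, String> plan = new HashMap<>();
            for (int i = 0; i < regions.size(); i++) {
                plan.put(regions.get(i), servers.get(i % servers.size()));
            }
            System.out.println(plan); // one region per server, round-robin
        }
    }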
2023-07-19 21:15:37,037 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=794719845e52e5a9725091870bb8beb7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:37,037 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801337037"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801337037"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801337037"}]},"ts":"1689801337037"} 2023-07-19 21:15:37,037 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7f397cfb91d0a3c0f0fe3072356f3d24, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:37,037 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801337037"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801337037"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801337037"}]},"ts":"1689801337037"} 2023-07-19 21:15:37,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 794719845e52e5a9725091870bb8beb7, server=jenkins-hbase4.apache.org,41399,1689801335868}] 2023-07-19 21:15:37,039 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 7f397cfb91d0a3c0f0fe3072356f3d24, server=jenkins-hbase4.apache.org,44931,1689801335700}] 2023-07-19 21:15:37,191 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:37,192 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:37,192 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:37,192 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:37,194 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59760, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:37,194 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50736, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:37,201 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:37,201 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 
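Note: the RegionStateStore "Put" JSON above records the OPENING state in hbase:meta under the info family with regioninfo, sn and state qualifiers. A sketch of an equivalent client-side Put; the row key is the full region name from the log, while the cell values here are placeholders (the real ones are serialized RegionInfo, ServerName and state bytes):

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStatePutSketch {
        public static void main(String[] args) {
            byte[] row = Bytes.toBytes(
                "hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.");
            Put put = new Put(row);
            // HConstants.CATALOG_FAMILY is the "info" family used by hbase:meta.
            put.addColumn(HConstants.CATALOG_FAMILY, Bytes.toBytes("regioninfo"),
                Bytes.toBytes("<serialized RegionInfo, placeholder>"));
            put.addColumn(HConstants.CATALOG_FAMILY, Bytes.toBytes("sn"),
                Bytes.toBytes("jenkins-hbase4.apache.org,41399,1689801335868"));
            put.addColumn(HConstants.CATALOG_FAMILY, Bytes.toBytes("state"),
                Bytes.toBytes("OPENING"));
            System.out.println(put);
        }
    }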
2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f397cfb91d0a3c0f0fe3072356f3d24, NAME => 'hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 794719845e52e5a9725091870bb8beb7, NAME => 'hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. service=MultiRowMutationService 2023-07-19 21:15:37,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,202 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,202 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,206 INFO [StoreOpener-794719845e52e5a9725091870bb8beb7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,207 INFO [StoreOpener-7f397cfb91d0a3c0f0fe3072356f3d24-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,208 DEBUG [StoreOpener-794719845e52e5a9725091870bb8beb7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/info 2023-07-19 21:15:37,208 DEBUG [StoreOpener-794719845e52e5a9725091870bb8beb7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/info 2023-07-19 21:15:37,208 DEBUG [StoreOpener-7f397cfb91d0a3c0f0fe3072356f3d24-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/m 2023-07-19 21:15:37,208 DEBUG [StoreOpener-7f397cfb91d0a3c0f0fe3072356f3d24-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/m 2023-07-19 21:15:37,209 INFO [StoreOpener-794719845e52e5a9725091870bb8beb7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 794719845e52e5a9725091870bb8beb7 columnFamilyName info 2023-07-19 21:15:37,209 INFO [StoreOpener-7f397cfb91d0a3c0f0fe3072356f3d24-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f397cfb91d0a3c0f0fe3072356f3d24 columnFamilyName m 2023-07-19 21:15:37,209 INFO [StoreOpener-794719845e52e5a9725091870bb8beb7-1] regionserver.HStore(310): Store=794719845e52e5a9725091870bb8beb7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:37,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,210 INFO [StoreOpener-7f397cfb91d0a3c0f0fe3072356f3d24-1] regionserver.HStore(310): Store=7f397cfb91d0a3c0f0fe3072356f3d24/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:37,211 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,212 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:37,214 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:37,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:37,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:37,218 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 794719845e52e5a9725091870bb8beb7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10209400960, jitterRate=-0.0491754412651062}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:37,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 794719845e52e5a9725091870bb8beb7: 2023-07-19 21:15:37,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f397cfb91d0a3c0f0fe3072356f3d24; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5ee6d5e4, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:37,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f397cfb91d0a3c0f0fe3072356f3d24: 2023-07-19 21:15:37,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7., pid=8, masterSystemTime=1689801337191 2023-07-19 21:15:37,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24., pid=9, masterSystemTime=1689801337191 2023-07-19 21:15:37,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:37,226 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:37,226 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=794719845e52e5a9725091870bb8beb7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:37,226 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801337226"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801337226"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801337226"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801337226"}]},"ts":"1689801337226"} 2023-07-19 21:15:37,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:37,227 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 
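The "Opened 794719845e52e5a9725091870bb8beb7 ... SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10209400960, jitterRate=-0.0491754412651062}}}" entry records the split policy picked for hbase:namespace. A rough sketch of where those numbers likely come from, assuming stock 2.4 defaults (hbase.hregion.max.filesize of 10 GB, hbase.hregion.memstore.flush.size of 128 MB) and that desiredMaxFileSize is the configured maximum adjusted by the printed jitterRate; the constants are assumptions, not values read from this cluster's configuration.

public class SplitPolicyNumbers {
    public static void main(String[] args) {
        long maxFileSize = 10L * 1024 * 1024 * 1024;  // assumed hbase.hregion.max.filesize default (10 GB)
        long flushSize   = 128L * 1024 * 1024;        // assumed hbase.hregion.memstore.flush.size default (128 MB)
        double jitterRate = -0.0491754412651062;      // value printed in the log entry above

        // ConstantSizeRegionSplitPolicy applies a random jitter around the configured max size.
        long desiredMaxFileSize = (long) (maxFileSize * (1 + jitterRate));
        System.out.println(desiredMaxFileSize);       // ~10209400960, matching the log

        // IncreasingToUpperBoundRegionSplitPolicy starts from twice the flush size by default.
        long initialSize = 2 * flushSize;
        System.out.println(initialSize);              // 268435456, matching the log
    }
}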
2023-07-19 21:15:37,227 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7f397cfb91d0a3c0f0fe3072356f3d24, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:37,227 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801337227"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801337227"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801337227"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801337227"}]},"ts":"1689801337227"} 2023-07-19 21:15:37,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 21:15:37,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 794719845e52e5a9725091870bb8beb7, server=jenkins-hbase4.apache.org,41399,1689801335868 in 190 msec 2023-07-19 21:15:37,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 21:15:37,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 7f397cfb91d0a3c0f0fe3072356f3d24, server=jenkins-hbase4.apache.org,44931,1689801335700 in 190 msec 2023-07-19 21:15:37,232 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 21:15:37,233 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=794719845e52e5a9725091870bb8beb7, ASSIGN in 310 msec 2023-07-19 21:15:37,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 21:15:37,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7f397cfb91d0a3c0f0fe3072356f3d24, ASSIGN in 198 msec 2023-07-19 21:15:37,233 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:37,233 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337233"}]},"ts":"1689801337233"} 2023-07-19 21:15:37,234 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:37,235 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337235"}]},"ts":"1689801337235"} 2023-07-19 21:15:37,235 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 21:15:37,236 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 21:15:37,238 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:37,239 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 359 msec 2023-07-19 21:15:37,240 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:37,241 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 262 msec 2023-07-19 21:15:37,281 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 21:15:37,282 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:37,282 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:37,286 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:37,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:37,291 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50750, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:37,301 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:37,301 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 21:15:37,301 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
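Once the RSGroupStartupWorker reports "RSGroup table=hbase:rsgroup is online" and refreshes in online mode, group metadata can be queried through the rsgroup admin client (the "list rsgroup" request seen further down goes through the same endpoint). A minimal sketch, assuming the branch-2.4 RSGroupAdminClient API; the connection setup is illustrative and not taken from the test.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRSGroups {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
            // A freshly started cluster is expected to contain only the "default" group.
            for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
                System.out.println(info.getName() + " servers=" + info.getServers());
            }
        }
    }
}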
2023-07-19 21:15:37,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 21:15:37,308 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:37,308 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:37,310 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:37,312 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:37,313 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45995,1689801335521] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 21:15:37,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-19 21:15:37,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 21:15:37,334 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:37,338 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-07-19 21:15:37,351 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 21:15:37,355 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 21:15:37,355 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.160sec 2023-07-19 21:15:37,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-19 21:15:37,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:37,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-19 21:15:37,359 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-19 21:15:37,361 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:37,361 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:37,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-19 21:15:37,363 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:37,364 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b empty. 2023-07-19 21:15:37,364 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:37,364 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-19 21:15:37,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-19 21:15:37,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-19 21:15:37,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:37,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:37,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
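The master creates hbase:quota with the two column families 'q' and 'u' that back quota storage, and MasterQuotaManager and NamespaceAuditor start up. Purely as an illustration of what eventually lands in that table (this particular throttle is not part of the logged test run), a quota could be defined through the public Admin API roughly as follows.

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class DefineThrottle {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Hypothetical throttle: limit the "np1" namespace to 100 requests per second.
            // The setting is persisted as a row in the hbase:quota table created above.
            admin.setQuota(QuotaSettingsFactory.throttleNamespace(
                "np1", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
        }
    }
}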
2023-07-19 21:15:37,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 21:15:37,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45995,1689801335521-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 21:15:37,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45995,1689801335521-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-19 21:15:37,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 21:15:37,382 DEBUG [Listener at localhost/37503] zookeeper.ReadOnlyZKClient(139): Connect 0x6b279b3b to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:37,390 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:37,395 DEBUG [Listener at localhost/37503] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e6c2083, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:37,395 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6b5ad6f078e5a10c90943cf75c7df84b, NAME => 'hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp 2023-07-19 21:15:37,399 DEBUG [hconnection-0x64b278aa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:37,401 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44012, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:37,402 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:37,403 INFO [Listener at localhost/37503] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:37,411 DEBUG [Listener at localhost/37503] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-19 21:15:37,415 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33560, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 21:15:37,419 DEBUG [Listener at localhost/37503-EventThread] 
zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 21:15:37,419 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:37,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 21:15:37,421 DEBUG [Listener at localhost/37503] zookeeper.ReadOnlyZKClient(139): Connect 0x408dab3e to 127.0.0.1:51495 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:37,435 DEBUG [Listener at localhost/37503] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49b7c08e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:37,436 INFO [Listener at localhost/37503] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51495 2023-07-19 21:15:37,440 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:37,442 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017f70a5cd000a connected 2023-07-19 21:15:37,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-19 21:15:37,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-19 21:15:37,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-19 21:15:37,456 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:37,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 16 msec 2023-07-19 21:15:37,514 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 21:15:37,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-19 21:15:37,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-19 21:15:37,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-19 21:15:37,564 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:37,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 14 2023-07-19 21:15:37,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 21:15:37,568 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:37,569 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:37,572 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:37,574 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,575 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 empty. 2023-07-19 21:15:37,575 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,575 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 21:15:37,602 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:37,604 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => eb66e2a2ee0b557a4f75233b42514d96, NAME => 'np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp 2023-07-19 21:15:37,620 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,621 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing eb66e2a2ee0b557a4f75233b42514d96, disabling compactions & flushes 2023-07-19 21:15:37,621 INFO [RegionOpenAndInit-np1:table1-pool-0] 
regionserver.HRegion(1626): Closing region np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,621 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,621 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. after waiting 0 ms 2023-07-19 21:15:37,621 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,621 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,621 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for eb66e2a2ee0b557a4f75233b42514d96: 2023-07-19 21:15:37,624 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:37,625 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801337625"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801337625"}]},"ts":"1689801337625"} 2023-07-19 21:15:37,626 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:37,627 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:37,627 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337627"}]},"ts":"1689801337627"} 2023-07-19 21:15:37,628 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-19 21:15:37,632 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:37,632 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:37,633 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:37,633 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:37,633 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:37,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, ASSIGN}] 2023-07-19 21:15:37,634 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, ASSIGN 2023-07-19 21:15:37,635 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=14, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41399,1689801335868; forceNewPlan=false, retain=false 2023-07-19 21:15:37,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 21:15:37,785 INFO [jenkins-hbase4:45995] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 21:15:37,786 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=eb66e2a2ee0b557a4f75233b42514d96, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:37,787 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801337786"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801337786"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801337786"}]},"ts":"1689801337786"} 2023-07-19 21:15:37,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE; OpenRegionProcedure eb66e2a2ee0b557a4f75233b42514d96, server=jenkins-hbase4.apache.org,41399,1689801335868}] 2023-07-19 21:15:37,828 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,828 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 6b5ad6f078e5a10c90943cf75c7df84b, disabling compactions & flushes 2023-07-19 21:15:37,828 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:37,828 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:37,828 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. after waiting 0 ms 2023-07-19 21:15:37,828 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:37,829 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 
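The client calls logged at 21:15:37,445 and 21:15:37,560 create the 'np1' namespace with hbase.namespace.quota.maxregions=5 and maxtables=2 and then the single-region table np1:table1. A client-side equivalent, sketched with the standard 2.x Admin API (the namespace, table, and family names match the log; the helper class and connection setup are illustrative), would look roughly like the code below. That maxregions=5 limit is also what later causes the np1:table2 creation to roll back with a QuotaExceededException.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateNp1 {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Namespace with the region/table quotas seen in the master log.
            admin.createNamespace(NamespaceDescriptor.create("np1")
                .addConfiguration("hbase.namespace.quota.maxregions", "5")
                .addConfiguration("hbase.namespace.quota.maxtables", "2")
                .build());
            // Single-region table np1:table1 with one family 'fam1'.
            admin.createTable(TableDescriptorBuilder
                .newBuilder(TableName.valueOf("np1", "table1"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                .build());
        }
    }
}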
2023-07-19 21:15:37,829 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 6b5ad6f078e5a10c90943cf75c7df84b: 2023-07-19 21:15:37,833 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:37,834 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689801337834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801337834"}]},"ts":"1689801337834"} 2023-07-19 21:15:37,836 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:37,838 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:37,838 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337838"}]},"ts":"1689801337838"} 2023-07-19 21:15:37,840 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-19 21:15:37,843 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:37,844 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:37,844 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:37,844 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:37,844 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:37,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6b5ad6f078e5a10c90943cf75c7df84b, ASSIGN}] 2023-07-19 21:15:37,845 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6b5ad6f078e5a10c90943cf75c7df84b, ASSIGN 2023-07-19 21:15:37,846 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6b5ad6f078e5a10c90943cf75c7df84b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44931,1689801335700; forceNewPlan=false, retain=false 2023-07-19 21:15:37,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 21:15:37,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 
2023-07-19 21:15:37,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb66e2a2ee0b557a4f75233b42514d96, NAME => 'np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:37,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:37,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,946 INFO [StoreOpener-eb66e2a2ee0b557a4f75233b42514d96-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,948 DEBUG [StoreOpener-eb66e2a2ee0b557a4f75233b42514d96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/fam1 2023-07-19 21:15:37,948 DEBUG [StoreOpener-eb66e2a2ee0b557a4f75233b42514d96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/fam1 2023-07-19 21:15:37,948 INFO [StoreOpener-eb66e2a2ee0b557a4f75233b42514d96-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb66e2a2ee0b557a4f75233b42514d96 columnFamilyName fam1 2023-07-19 21:15:37,949 INFO [StoreOpener-eb66e2a2ee0b557a4f75233b42514d96-1] regionserver.HStore(310): Store=eb66e2a2ee0b557a4f75233b42514d96/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:37,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:37,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:37,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb66e2a2ee0b557a4f75233b42514d96; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10261865760, jitterRate=-0.044289276003837585}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:37,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb66e2a2ee0b557a4f75233b42514d96: 2023-07-19 21:15:37,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96., pid=16, masterSystemTime=1689801337940 2023-07-19 21:15:37,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:37,959 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=eb66e2a2ee0b557a4f75233b42514d96, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:37,959 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801337959"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801337959"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801337959"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801337959"}]},"ts":"1689801337959"} 2023-07-19 21:15:37,962 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-19 21:15:37,962 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; OpenRegionProcedure eb66e2a2ee0b557a4f75233b42514d96, server=jenkins-hbase4.apache.org,41399,1689801335868 in 172 msec 2023-07-19 21:15:37,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=14 2023-07-19 21:15:37,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=14, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, ASSIGN in 329 msec 2023-07-19 21:15:37,967 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:37,967 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801337967"}]},"ts":"1689801337967"} 2023-07-19 21:15:37,969 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-19 21:15:37,972 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:37,974 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateTableProcedure table=np1:table1 in 412 msec 2023-07-19 21:15:37,997 INFO [jenkins-hbase4:45995] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-19 21:15:37,999 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6b5ad6f078e5a10c90943cf75c7df84b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:37,999 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689801337998"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801337998"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801337998"}]},"ts":"1689801337998"} 2023-07-19 21:15:38,001 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 6b5ad6f078e5a10c90943cf75c7df84b, server=jenkins-hbase4.apache.org,44931,1689801335700}] 2023-07-19 21:15:38,159 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 
2023-07-19 21:15:38,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6b5ad6f078e5a10c90943cf75c7df84b, NAME => 'hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:38,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:38,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,163 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,165 DEBUG [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/q 2023-07-19 21:15:38,165 DEBUG [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/q 2023-07-19 21:15:38,165 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6b5ad6f078e5a10c90943cf75c7df84b columnFamilyName q 2023-07-19 21:15:38,166 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] regionserver.HStore(310): Store=6b5ad6f078e5a10c90943cf75c7df84b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:38,166 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,168 DEBUG 
[StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/u 2023-07-19 21:15:38,168 DEBUG [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/u 2023-07-19 21:15:38,168 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6b5ad6f078e5a10c90943cf75c7df84b columnFamilyName u 2023-07-19 21:15:38,168 INFO [StoreOpener-6b5ad6f078e5a10c90943cf75c7df84b-1] regionserver.HStore(310): Store=6b5ad6f078e5a10c90943cf75c7df84b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:38,169 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-19 21:15:38,172 INFO [Listener at localhost/37503] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 14 completed 2023-07-19 21:15:38,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
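The FlushLargeStoresPolicy message for hbase:quota ("No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor; using region.getMemStoreFlushHeapSize/# of families (64.0 M) instead") is the per-family flush lower bound falling back to the region flush size divided by the number of families. A minimal sketch of that arithmetic, assuming the default 128 MB region flush size and the two families (q, u) of hbase:quota.

public class FlushLowerBound {
    public static void main(String[] args) {
        long memStoreFlushSize = 128L * 1024 * 1024;  // assumed hbase.hregion.memstore.flush.size default
        int numFamilies = 2;                          // hbase:quota has two families: q and u
        // Fallback used when hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on the table.
        long flushSizeLowerBound = memStoreFlushSize / numFamilies;
        // Prints 67108864 (64 MB), the flushSizeLowerBound shown for hbase:quota in the next log entry.
        System.out.println(flushSizeLowerBound);
    }
}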
2023-07-19 21:15:38,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:38,173 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-19 21:15:38,176 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:38,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-19 21:15:38,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 21:15:38,183 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:38,184 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6b5ad6f078e5a10c90943cf75c7df84b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11437295520, jitterRate=0.06518115103244781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-19 21:15:38,184 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6b5ad6f078e5a10c90943cf75c7df84b: 2023-07-19 21:15:38,185 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b., pid=18, masterSystemTime=1689801338152 2023-07-19 21:15:38,191 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,191 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 
2023-07-19 21:15:38,191 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6b5ad6f078e5a10c90943cf75c7df84b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:38,191 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689801338191"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801338191"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801338191"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801338191"}]},"ts":"1689801338191"} 2023-07-19 21:15:38,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 21:15:38,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 6b5ad6f078e5a10c90943cf75c7df84b, server=jenkins-hbase4.apache.org,44931,1689801335700 in 192 msec 2023-07-19 21:15:38,203 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-19 21:15:38,204 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6b5ad6f078e5a10c90943cf75c7df84b, ASSIGN in 353 msec 2023-07-19 21:15:38,204 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:38,204 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801338204"}]},"ts":"1689801338204"} 2023-07-19 21:15:38,206 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-19 21:15:38,208 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:38,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=34 msec 2023-07-19 21:15:38,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 850 msec 2023-07-19 21:15:38,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 21:15:38,283 INFO [Listener at localhost/37503] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-19 21:15:38,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:38,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:38,287 INFO [Listener at localhost/37503] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-19 21:15:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-19 21:15:38,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-19 21:15:38,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 21:15:38,295 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801338295"}]},"ts":"1689801338295"} 2023-07-19 21:15:38,299 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-19 21:15:38,300 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-19 21:15:38,301 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, UNASSIGN}] 2023-07-19 21:15:38,302 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, UNASSIGN 2023-07-19 21:15:38,302 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=eb66e2a2ee0b557a4f75233b42514d96, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:38,302 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801338302"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801338302"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801338302"}]},"ts":"1689801338302"} 2023-07-19 21:15:38,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure eb66e2a2ee0b557a4f75233b42514d96, server=jenkins-hbase4.apache.org,41399,1689801335868}] 2023-07-19 21:15:38,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 21:15:38,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:38,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb66e2a2ee0b557a4f75233b42514d96, 
disabling compactions & flushes 2023-07-19 21:15:38,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:38,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:38,457 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. after waiting 0 ms 2023-07-19 21:15:38,457 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:38,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:38,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96. 2023-07-19 21:15:38,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb66e2a2ee0b557a4f75233b42514d96: 2023-07-19 21:15:38,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:38,471 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=eb66e2a2ee0b557a4f75233b42514d96, regionState=CLOSED 2023-07-19 21:15:38,471 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801338471"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801338471"}]},"ts":"1689801338471"} 2023-07-19 21:15:38,474 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-19 21:15:38,474 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure eb66e2a2ee0b557a4f75233b42514d96, server=jenkins-hbase4.apache.org,41399,1689801335868 in 169 msec 2023-07-19 21:15:38,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-19 21:15:38,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=eb66e2a2ee0b557a4f75233b42514d96, UNASSIGN in 173 msec 2023-07-19 21:15:38,477 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801338476"}]},"ts":"1689801338476"} 2023-07-19 21:15:38,482 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-19 21:15:38,484 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-19 21:15:38,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 197 msec 2023-07-19 21:15:38,593 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 21:15:38,594 INFO [Listener at localhost/37503] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-19 21:15:38,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-19 21:15:38,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,597 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-19 21:15:38,598 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:38,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:38,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 21:15:38,606 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:38,608 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/fam1, FileablePath, hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/recovered.edits] 2023-07-19 21:15:38,613 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/recovered.edits/4.seqid to hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/archive/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96/recovered.edits/4.seqid 2023-07-19 21:15:38,614 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/.tmp/data/np1/table1/eb66e2a2ee0b557a4f75233b42514d96 2023-07-19 21:15:38,614 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-19 21:15:38,616 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,618 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-19 21:15:38,620 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-19 21:15:38,621 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,621 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-19 21:15:38,621 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801338621"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:38,623 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 21:15:38,623 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => eb66e2a2ee0b557a4f75233b42514d96, NAME => 'np1:table1,,1689801337560.eb66e2a2ee0b557a4f75233b42514d96.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 21:15:38,623 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-19 21:15:38,623 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801338623"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:38,625 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-19 21:15:38,627 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-19 21:15:38,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 32 msec 2023-07-19 21:15:38,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-19 21:15:38,706 INFO [Listener at localhost/37503] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-19 21:15:38,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-19 21:15:38,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,722 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,725 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,727 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-19 21:15:38,729 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/namespace/np1 2023-07-19 21:15:38,729 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:38,729 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,731 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-19 21:15:38,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-19 21:15:38,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45995] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-19 21:15:38,829 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 21:15:38,829 INFO [Listener at localhost/37503] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b279b3b to 127.0.0.1:51495 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] util.JVMClusterUtil(257): Found active master hash=492004527, stopped=false 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 21:15:38,830 DEBUG [Listener at localhost/37503] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-19 21:15:38,830 INFO [Listener at localhost/37503] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:38,833 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:38,833 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:38,833 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:38,833 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:38,833 DEBUG [Listener at 
localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:38,833 INFO [Listener at localhost/37503] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 21:15:38,834 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:38,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:38,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:38,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:38,839 DEBUG [Listener at localhost/37503] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x160047a1 to 127.0.0.1:51495 2023-07-19 21:15:38,839 DEBUG [Listener at localhost/37503] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44931,1689801335700' ***** 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41399,1689801335868' ***** 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45745,1689801336029' ***** 2023-07-19 21:15:38,839 INFO [Listener at localhost/37503] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:38,839 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:38,839 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:38,839 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:38,854 INFO [RS:0;jenkins-hbase4:44931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67054ef5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:38,854 INFO [RS:2;jenkins-hbase4:45745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@76fdc157{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:38,854 INFO [RS:1;jenkins-hbase4:41399] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@440c4fd4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:38,854 INFO [RS:0;jenkins-hbase4:44931] server.AbstractConnector(383): Stopped ServerConnector@151e3f23{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:38,854 INFO [RS:2;jenkins-hbase4:45745] server.AbstractConnector(383): Stopped ServerConnector@3d692739{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:38,854 INFO [RS:1;jenkins-hbase4:41399] server.AbstractConnector(383): Stopped ServerConnector@2b381da4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:38,854 INFO [RS:0;jenkins-hbase4:44931] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:38,855 INFO [RS:1;jenkins-hbase4:41399] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:38,854 INFO [RS:2;jenkins-hbase4:45745] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:38,855 INFO [RS:0;jenkins-hbase4:44931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3388f574{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:38,858 INFO [RS:1;jenkins-hbase4:41399] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c19355a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:38,858 INFO [RS:2;jenkins-hbase4:45745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37ae8bbb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:38,858 INFO [RS:1;jenkins-hbase4:41399] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@cd78568{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:38,858 INFO [RS:0;jenkins-hbase4:44931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50d22f03{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:38,859 INFO [RS:2;jenkins-hbase4:45745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50d1f456{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:38,859 INFO [RS:1;jenkins-hbase4:41399] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:38,859 INFO [RS:0;jenkins-hbase4:44931] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:38,859 INFO [RS:1;jenkins-hbase4:41399] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 21:15:38,859 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:38,859 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:38,859 INFO [RS:0;jenkins-hbase4:44931] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:38,862 INFO [RS:0;jenkins-hbase4:44931] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:38,862 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(3305): Received CLOSE for 6b5ad6f078e5a10c90943cf75c7df84b 2023-07-19 21:15:38,859 INFO [RS:1;jenkins-hbase4:41399] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:38,862 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(3305): Received CLOSE for 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:38,862 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(3305): Received CLOSE for 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:38,862 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:38,862 INFO [RS:2;jenkins-hbase4:45745] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:38,863 DEBUG [RS:0;jenkins-hbase4:44931] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6decd915 to 127.0.0.1:51495 2023-07-19 21:15:38,863 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:38,866 DEBUG [RS:0;jenkins-hbase4:44931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:38,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6b5ad6f078e5a10c90943cf75c7df84b, disabling compactions & flushes 2023-07-19 21:15:38,866 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-19 21:15:38,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,866 DEBUG [RS:1;jenkins-hbase4:41399] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x296ca9c1 to 127.0.0.1:51495 2023-07-19 21:15:38,865 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:38,865 INFO [RS:2;jenkins-hbase4:45745] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:38,867 INFO [RS:2;jenkins-hbase4:45745] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-19 21:15:38,868 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:38,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 794719845e52e5a9725091870bb8beb7, disabling compactions & flushes 2023-07-19 21:15:38,868 DEBUG [RS:2;jenkins-hbase4:45745] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11dcc035 to 127.0.0.1:51495 2023-07-19 21:15:38,867 DEBUG [RS:1;jenkins-hbase4:41399] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:38,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,866 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1478): Online Regions={6b5ad6f078e5a10c90943cf75c7df84b=hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b., 7f397cfb91d0a3c0f0fe3072356f3d24=hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24.} 2023-07-19 21:15:38,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. after waiting 0 ms 2023-07-19 21:15:38,869 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 21:15:38,868 DEBUG [RS:2;jenkins-hbase4:45745] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:38,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:38,869 INFO [RS:2;jenkins-hbase4:45745] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:38,869 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1478): Online Regions={794719845e52e5a9725091870bb8beb7=hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7.} 2023-07-19 21:15:38,869 DEBUG [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1504): Waiting on 6b5ad6f078e5a10c90943cf75c7df84b, 7f397cfb91d0a3c0f0fe3072356f3d24 2023-07-19 21:15:38,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,870 DEBUG [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1504): Waiting on 794719845e52e5a9725091870bb8beb7 2023-07-19 21:15:38,870 INFO [RS:2;jenkins-hbase4:45745] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:38,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:38,870 INFO [RS:2;jenkins-hbase4:45745] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:38,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 
after waiting 0 ms 2023-07-19 21:15:38,870 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 21:15:38,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:38,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 794719845e52e5a9725091870bb8beb7 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-19 21:15:38,870 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 21:15:38,870 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-19 21:15:38,870 DEBUG [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-19 21:15:38,871 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:38,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:38,871 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:38,871 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:38,871 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:38,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-19 21:15:38,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/quota/6b5ad6f078e5a10c90943cf75c7df84b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:38,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6b5ad6f078e5a10c90943cf75c7df84b: 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689801337358.6b5ad6f078e5a10c90943cf75c7df84b. 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f397cfb91d0a3c0f0fe3072356f3d24, disabling compactions & flushes 2023-07-19 21:15:38,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 
after waiting 0 ms 2023-07-19 21:15:38,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:38,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7f397cfb91d0a3c0f0fe3072356f3d24 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-19 21:15:38,880 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:38,882 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:38,883 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:38,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/info/24105d3655f74bbea7bdce9f78d30c93 2023-07-19 21:15:38,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/.tmp/m/43fd8d9a3cc44b3caa287ea54d5b88ec 2023-07-19 21:15:38,910 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 24105d3655f74bbea7bdce9f78d30c93 2023-07-19 21:15:38,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/.tmp/m/43fd8d9a3cc44b3caa287ea54d5b88ec as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/m/43fd8d9a3cc44b3caa287ea54d5b88ec 2023-07-19 21:15:38,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/.tmp/info/4d54af9cf3f949db90d410897bc7dc43 2023-07-19 21:15:38,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/m/43fd8d9a3cc44b3caa287ea54d5b88ec, entries=1, sequenceid=7, filesize=4.9 K 2023-07-19 21:15:38,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 7f397cfb91d0a3c0f0fe3072356f3d24 in 50ms, sequenceid=7, compaction requested=false 2023-07-19 21:15:38,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-19 21:15:38,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d54af9cf3f949db90d410897bc7dc43 2023-07-19 21:15:38,926 
DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/.tmp/info/4d54af9cf3f949db90d410897bc7dc43 as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/info/4d54af9cf3f949db90d410897bc7dc43 2023-07-19 21:15:38,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d54af9cf3f949db90d410897bc7dc43 2023-07-19 21:15:38,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/info/4d54af9cf3f949db90d410897bc7dc43, entries=3, sequenceid=8, filesize=5.0 K 2023-07-19 21:15:38,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 794719845e52e5a9725091870bb8beb7 in 63ms, sequenceid=8, compaction requested=false 2023-07-19 21:15:38,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-19 21:15:38,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/rep_barrier/a1be4ca242fb4f2eb0389b461ede0deb 2023-07-19 21:15:38,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/rsgroup/7f397cfb91d0a3c0f0fe3072356f3d24/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-19 21:15:38,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/namespace/794719845e52e5a9725091870bb8beb7/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-19 21:15:38,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:38,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:38,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 2023-07-19 21:15:38,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f397cfb91d0a3c0f0fe3072356f3d24: 2023-07-19 21:15:38,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689801336978.7f397cfb91d0a3c0f0fe3072356f3d24. 
2023-07-19 21:15:38,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 794719845e52e5a9725091870bb8beb7: 2023-07-19 21:15:38,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689801336879.794719845e52e5a9725091870bb8beb7. 2023-07-19 21:15:38,951 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a1be4ca242fb4f2eb0389b461ede0deb 2023-07-19 21:15:38,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/table/16b1111da0cc4964a182af8be6b17df3 2023-07-19 21:15:38,971 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 16b1111da0cc4964a182af8be6b17df3 2023-07-19 21:15:38,972 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/info/24105d3655f74bbea7bdce9f78d30c93 as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/info/24105d3655f74bbea7bdce9f78d30c93 2023-07-19 21:15:38,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 24105d3655f74bbea7bdce9f78d30c93 2023-07-19 21:15:38,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/info/24105d3655f74bbea7bdce9f78d30c93, entries=32, sequenceid=31, filesize=8.5 K 2023-07-19 21:15:38,978 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/rep_barrier/a1be4ca242fb4f2eb0389b461ede0deb as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/rep_barrier/a1be4ca242fb4f2eb0389b461ede0deb 2023-07-19 21:15:38,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a1be4ca242fb4f2eb0389b461ede0deb 2023-07-19 21:15:38,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/rep_barrier/a1be4ca242fb4f2eb0389b461ede0deb, entries=1, sequenceid=31, filesize=4.9 K 2023-07-19 21:15:38,983 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/.tmp/table/16b1111da0cc4964a182af8be6b17df3 as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/table/16b1111da0cc4964a182af8be6b17df3 2023-07-19 21:15:38,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): 
Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 16b1111da0cc4964a182af8be6b17df3 2023-07-19 21:15:38,989 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/table/16b1111da0cc4964a182af8be6b17df3, entries=8, sequenceid=31, filesize=5.2 K 2023-07-19 21:15:38,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 120ms, sequenceid=31, compaction requested=false 2023-07-19 21:15:38,991 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-19 21:15:39,004 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-19 21:15:39,004 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:39,005 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:39,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:39,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:39,070 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44931,1689801335700; all regions closed. 2023-07-19 21:15:39,070 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41399,1689801335868; all regions closed. 2023-07-19 21:15:39,070 DEBUG [RS:0;jenkins-hbase4:44931] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 21:15:39,070 DEBUG [RS:1;jenkins-hbase4:41399] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-19 21:15:39,070 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45745,1689801336029; all regions closed. 2023-07-19 21:15:39,071 DEBUG [RS:2;jenkins-hbase4:45745] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-19 21:15:39,083 DEBUG [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs 2023-07-19 21:15:39,083 DEBUG [RS:0;jenkins-hbase4:44931] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs 2023-07-19 21:15:39,083 INFO [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45745%2C1689801336029.meta:.meta(num 1689801336818) 2023-07-19 21:15:39,083 INFO [RS:0;jenkins-hbase4:44931] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44931%2C1689801335700:(num 1689801336617) 2023-07-19 21:15:39,083 DEBUG [RS:0;jenkins-hbase4:44931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:39,083 INFO [RS:0;jenkins-hbase4:44931] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:39,084 INFO [RS:0;jenkins-hbase4:44931] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:39,084 INFO [RS:0;jenkins-hbase4:44931] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:39,084 INFO [RS:0;jenkins-hbase4:44931] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:39,084 INFO [RS:0;jenkins-hbase4:44931] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:39,084 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:39,085 INFO [RS:0;jenkins-hbase4:44931] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44931 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44931,1689801335700 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/rs 2023-07-19 21:15:39,088 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:39,090 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44931,1689801335700] 2023-07-19 21:15:39,090 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44931,1689801335700; numProcessing=1 2023-07-19 21:15:39,090 DEBUG [RS:1;jenkins-hbase4:41399] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs 2023-07-19 21:15:39,090 INFO [RS:1;jenkins-hbase4:41399] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41399%2C1689801335868:(num 1689801336617) 2023-07-19 21:15:39,090 DEBUG [RS:1;jenkins-hbase4:41399] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:39,091 INFO [RS:1;jenkins-hbase4:41399] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:39,091 INFO [RS:1;jenkins-hbase4:41399] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:39,091 INFO [RS:1;jenkins-hbase4:41399] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:39,091 INFO [RS:1;jenkins-hbase4:41399] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:39,091 INFO [RS:1;jenkins-hbase4:41399] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:39,091 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:39,091 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44931,1689801335700 already deleted, retry=false 2023-07-19 21:15:39,091 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44931,1689801335700 expired; onlineServers=2 2023-07-19 21:15:39,092 INFO [RS:1;jenkins-hbase4:41399] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41399 2023-07-19 21:15:39,094 DEBUG [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/oldWALs 2023-07-19 21:15:39,094 INFO [RS:2;jenkins-hbase4:45745] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45745%2C1689801336029:(num 1689801336617) 2023-07-19 21:15:39,094 DEBUG [RS:2;jenkins-hbase4:45745] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:39,094 INFO [RS:2;jenkins-hbase4:45745] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:39,095 INFO [RS:2;jenkins-hbase4:45745] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:39,095 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 21:15:39,095 INFO [RS:2;jenkins-hbase4:45745] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45745 2023-07-19 21:15:39,096 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:39,096 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41399,1689801335868 2023-07-19 21:15:39,096 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:39,097 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:39,097 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45745,1689801336029 2023-07-19 21:15:39,099 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41399,1689801335868] 2023-07-19 21:15:39,099 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41399,1689801335868; numProcessing=2 2023-07-19 21:15:39,102 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41399,1689801335868 already deleted, retry=false 2023-07-19 21:15:39,102 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41399,1689801335868 expired; onlineServers=1 2023-07-19 21:15:39,102 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45745,1689801336029] 2023-07-19 21:15:39,102 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45745,1689801336029; numProcessing=3 2023-07-19 21:15:39,103 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45745,1689801336029 already deleted, retry=false 2023-07-19 21:15:39,103 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45745,1689801336029 expired; onlineServers=0 2023-07-19 21:15:39,103 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45995,1689801335521' ***** 2023-07-19 21:15:39,103 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 21:15:39,103 DEBUG [M:0;jenkins-hbase4:45995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@196cf3dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:39,103 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:39,105 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:39,105 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:39,105 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:39,105 INFO [M:0;jenkins-hbase4:45995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@362229ca{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:39,106 INFO [M:0;jenkins-hbase4:45995] server.AbstractConnector(383): Stopped ServerConnector@1670d5cc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:39,106 INFO [M:0;jenkins-hbase4:45995] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:39,106 INFO [M:0;jenkins-hbase4:45995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d3661ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:39,106 INFO [M:0;jenkins-hbase4:45995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6b4d4f4c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:39,107 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45995,1689801335521 2023-07-19 21:15:39,107 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45995,1689801335521; all regions closed. 2023-07-19 21:15:39,107 DEBUG [M:0;jenkins-hbase4:45995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:39,107 INFO [M:0;jenkins-hbase4:45995] master.HMaster(1491): Stopping master jetty server 2023-07-19 21:15:39,107 INFO [M:0;jenkins-hbase4:45995] server.AbstractConnector(383): Stopped ServerConnector@7143040{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:39,108 DEBUG [M:0;jenkins-hbase4:45995] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 21:15:39,108 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-19 21:15:39,108 DEBUG [M:0;jenkins-hbase4:45995] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 21:15:39,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801336390] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801336390,5,FailOnTimeoutGroup] 2023-07-19 21:15:39,108 INFO [M:0;jenkins-hbase4:45995] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 21:15:39,109 INFO [M:0;jenkins-hbase4:45995] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-19 21:15:39,109 INFO [M:0;jenkins-hbase4:45995] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:39,109 DEBUG [M:0;jenkins-hbase4:45995] master.HMaster(1512): Stopping service threads 2023-07-19 21:15:39,109 INFO [M:0;jenkins-hbase4:45995] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 21:15:39,110 ERROR [M:0;jenkins-hbase4:45995] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-19 21:15:39,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801336395] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801336395,5,FailOnTimeoutGroup] 2023-07-19 21:15:39,111 INFO [M:0;jenkins-hbase4:45995] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 21:15:39,111 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 21:15:39,111 DEBUG [M:0;jenkins-hbase4:45995] zookeeper.ZKUtil(398): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 21:15:39,111 WARN [M:0;jenkins-hbase4:45995] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 21:15:39,111 INFO [M:0;jenkins-hbase4:45995] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 21:15:39,112 INFO [M:0;jenkins-hbase4:45995] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 21:15:39,112 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:39,112 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:39,112 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:39,112 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:39,112 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 21:15:39,112 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-19 21:15:39,124 INFO [M:0;jenkins-hbase4:45995] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b718fd4605674f70baf34327fdbcf556 2023-07-19 21:15:39,129 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b718fd4605674f70baf34327fdbcf556 as hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b718fd4605674f70baf34327fdbcf556 2023-07-19 21:15:39,135 INFO [M:0;jenkins-hbase4:45995] regionserver.HStore(1080): Added hdfs://localhost:45035/user/jenkins/test-data/56bc28af-c36b-b239-11f3-979f2c432455/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b718fd4605674f70baf34327fdbcf556, entries=24, sequenceid=194, filesize=12.4 K 2023-07-19 21:15:39,135 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95234, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=194, compaction requested=false 2023-07-19 21:15:39,137 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:39,137 DEBUG [M:0;jenkins-hbase4:45995] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:39,141 INFO [M:0;jenkins-hbase4:45995] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 21:15:39,141 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:39,142 INFO [M:0;jenkins-hbase4:45995] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45995 2023-07-19 21:15:39,144 DEBUG [M:0;jenkins-hbase4:45995] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45995,1689801335521 already deleted, retry=false 2023-07-19 21:15:39,435 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,435 INFO [M:0;jenkins-hbase4:45995] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45995,1689801335521; zookeeper connection closed. 2023-07-19 21:15:39,435 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): master:45995-0x1017f70a5cd0000, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,535 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,535 INFO [RS:2;jenkins-hbase4:45745] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45745,1689801336029; zookeeper connection closed. 
2023-07-19 21:15:39,535 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:45745-0x1017f70a5cd0003, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,537 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7e9f4da] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7e9f4da 2023-07-19 21:15:39,636 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,636 INFO [RS:1;jenkins-hbase4:41399] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41399,1689801335868; zookeeper connection closed. 2023-07-19 21:15:39,636 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:41399-0x1017f70a5cd0002, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,636 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@314faa54] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@314faa54 2023-07-19 21:15:39,736 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,736 INFO [RS:0;jenkins-hbase4:44931] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44931,1689801335700; zookeeper connection closed. 2023-07-19 21:15:39,736 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): regionserver:44931-0x1017f70a5cd0001, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:39,736 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2435381c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2435381c 2023-07-19 21:15:39,736 INFO [Listener at localhost/37503] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-19 21:15:39,737 WARN [Listener at localhost/37503] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:39,741 INFO [Listener at localhost/37503] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:39,847 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:39,847 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-9475375-172.31.14.131-1689801334555 (Datanode Uuid 4a32452a-5ca5-4d73-a0dc-ec921fb74b00) service to localhost/127.0.0.1:45035 2023-07-19 21:15:39,848 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data5/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 
21:15:39,848 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data6/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:39,850 WARN [Listener at localhost/37503] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:39,855 INFO [Listener at localhost/37503] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:39,958 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:39,959 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-9475375-172.31.14.131-1689801334555 (Datanode Uuid 9e22528d-2c35-4aaa-b67e-37ac6e2f08f1) service to localhost/127.0.0.1:45035 2023-07-19 21:15:39,959 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data3/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:39,959 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data4/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:39,961 WARN [Listener at localhost/37503] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:39,965 INFO [Listener at localhost/37503] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:40,068 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:40,068 WARN [BP-9475375-172.31.14.131-1689801334555 heartbeating to localhost/127.0.0.1:45035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-9475375-172.31.14.131-1689801334555 (Datanode Uuid 022266d3-1e36-4865-89e3-f3f1c51d763c) service to localhost/127.0.0.1:45035 2023-07-19 21:15:40,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data1/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:40,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/cluster_0189cbdc-91a3-2155-f8d0-492d75048c47/dfs/data/data2/current/BP-9475375-172.31.14.131-1689801334555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-19 21:15:40,080 INFO [Listener at localhost/37503] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:40,195 INFO [Listener at localhost/37503] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.log.dir so I do NOT create it in target/test-data/3064665a-5c90-916a-9598-e6d697387183 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/747a6116-57eb-2930-7c95-805ca570230b/hadoop.tmp.dir so I do NOT create it in target/test-data/3064665a-5c90-916a-9598-e6d697387183 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d, deleteOnExit=true 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/test.cache.data in system properties and HBase conf 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.tmp.dir in system properties and HBase conf 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir in system properties and HBase conf 2023-07-19 21:15:40,225 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-19 21:15:40,226 DEBUG [Listener at localhost/37503] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-19 21:15:40,226 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/nfs.dump.dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-19 21:15:40,227 INFO [Listener at localhost/37503] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-19 21:15:40,231 WARN [Listener at localhost/37503] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:15:40,231 WARN [Listener at localhost/37503] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:15:40,273 WARN [Listener at localhost/37503] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:40,275 INFO [Listener at localhost/37503] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:40,279 INFO [Listener at localhost/37503] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/Jetty_localhost_44723_hdfs____x0h3he/webapp 2023-07-19 21:15:40,294 DEBUG [Listener at localhost/37503-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017f70a5cd000a, quorum=127.0.0.1:51495, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-19 21:15:40,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017f70a5cd000a, quorum=127.0.0.1:51495, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-19 21:15:40,371 INFO [Listener at localhost/37503] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44723 2023-07-19 21:15:40,375 WARN [Listener at localhost/37503] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-19 21:15:40,375 WARN [Listener at localhost/37503] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-19 21:15:40,469 WARN [Listener at localhost/45117] common.MetricsLoggerTask(153): Metrics logging will not be 
async since the logger is not log4j 2023-07-19 21:15:40,493 WARN [Listener at localhost/45117] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:40,497 WARN [Listener at localhost/45117] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:40,498 INFO [Listener at localhost/45117] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:40,502 INFO [Listener at localhost/45117] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/Jetty_localhost_33265_datanode____bw8zg8/webapp 2023-07-19 21:15:40,596 INFO [Listener at localhost/45117] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33265 2023-07-19 21:15:40,603 WARN [Listener at localhost/43091] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:40,619 WARN [Listener at localhost/43091] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:40,621 WARN [Listener at localhost/43091] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:40,622 INFO [Listener at localhost/43091] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:40,626 INFO [Listener at localhost/43091] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/Jetty_localhost_34923_datanode____817kbk/webapp 2023-07-19 21:15:40,730 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3f47dac13a22e4b: Processing first storage report for DS-221a5871-0e09-4857-91ce-7ab34a2e1727 from datanode 8f954bae-a65a-4884-b0bb-b2cf83dcbd48 2023-07-19 21:15:40,730 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3f47dac13a22e4b: from storage DS-221a5871-0e09-4857-91ce-7ab34a2e1727 node DatanodeRegistration(127.0.0.1:41939, datanodeUuid=8f954bae-a65a-4884-b0bb-b2cf83dcbd48, infoPort=34921, infoSecurePort=0, ipcPort=43091, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,730 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3f47dac13a22e4b: Processing first storage report for DS-6f903324-436d-456b-86b8-f288822fb90a from datanode 8f954bae-a65a-4884-b0bb-b2cf83dcbd48 2023-07-19 21:15:40,730 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3f47dac13a22e4b: from storage DS-6f903324-436d-456b-86b8-f288822fb90a node DatanodeRegistration(127.0.0.1:41939, datanodeUuid=8f954bae-a65a-4884-b0bb-b2cf83dcbd48, infoPort=34921, infoSecurePort=0, ipcPort=43091, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,741 INFO [Listener 
at localhost/43091] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34923 2023-07-19 21:15:40,748 WARN [Listener at localhost/37755] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:40,768 WARN [Listener at localhost/37755] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-19 21:15:40,770 WARN [Listener at localhost/37755] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-19 21:15:40,771 INFO [Listener at localhost/37755] log.Slf4jLog(67): jetty-6.1.26 2023-07-19 21:15:40,774 INFO [Listener at localhost/37755] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/Jetty_localhost_37003_datanode____.bvcoju/webapp 2023-07-19 21:15:40,852 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe7b697a3f68774c7: Processing first storage report for DS-279a5cc0-7977-4429-807b-f81e8d662a5f from datanode 459e8105-47bb-46ac-b99e-3c1bc9d6098a 2023-07-19 21:15:40,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe7b697a3f68774c7: from storage DS-279a5cc0-7977-4429-807b-f81e8d662a5f node DatanodeRegistration(127.0.0.1:33067, datanodeUuid=459e8105-47bb-46ac-b99e-3c1bc9d6098a, infoPort=39573, infoSecurePort=0, ipcPort=37755, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,852 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe7b697a3f68774c7: Processing first storage report for DS-717a50ca-9a67-4ba6-9bc5-3b98d16f4d08 from datanode 459e8105-47bb-46ac-b99e-3c1bc9d6098a 2023-07-19 21:15:40,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe7b697a3f68774c7: from storage DS-717a50ca-9a67-4ba6-9bc5-3b98d16f4d08 node DatanodeRegistration(127.0.0.1:33067, datanodeUuid=459e8105-47bb-46ac-b99e-3c1bc9d6098a, infoPort=39573, infoSecurePort=0, ipcPort=37755, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,873 INFO [Listener at localhost/37755] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37003 2023-07-19 21:15:40,881 WARN [Listener at localhost/43351] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-19 21:15:40,976 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x19245d41a21b0f99: Processing first storage report for DS-b773874d-ea37-4985-bc27-cae2c14534b2 from datanode 63677da5-6232-41cd-a867-77733efa8ee1 2023-07-19 21:15:40,976 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x19245d41a21b0f99: from storage DS-b773874d-ea37-4985-bc27-cae2c14534b2 node DatanodeRegistration(127.0.0.1:34811, datanodeUuid=63677da5-6232-41cd-a867-77733efa8ee1, infoPort=45475, infoSecurePort=0, ipcPort=43351, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), 
blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,976 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x19245d41a21b0f99: Processing first storage report for DS-44470961-3a26-487a-888a-2ec389f3543e from datanode 63677da5-6232-41cd-a867-77733efa8ee1 2023-07-19 21:15:40,976 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x19245d41a21b0f99: from storage DS-44470961-3a26-487a-888a-2ec389f3543e node DatanodeRegistration(127.0.0.1:34811, datanodeUuid=63677da5-6232-41cd-a867-77733efa8ee1, infoPort=45475, infoSecurePort=0, ipcPort=43351, storageInfo=lv=-57;cid=testClusterID;nsid=776174947;c=1689801340234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-19 21:15:40,993 DEBUG [Listener at localhost/43351] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183 2023-07-19 21:15:40,995 INFO [Listener at localhost/43351] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/zookeeper_0, clientPort=57109, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-19 21:15:40,996 INFO [Listener at localhost/43351] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57109 2023-07-19 21:15:40,996 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:40,997 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,016 INFO [Listener at localhost/43351] util.FSUtils(471): Created version file at hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 with version=8 2023-07-19 21:15:41,016 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40615/user/jenkins/test-data/ba55fa55-8d05-da33-de13-b4aabc939769/hbase-staging 2023-07-19 21:15:41,017 DEBUG [Listener at localhost/43351] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-19 21:15:41,017 DEBUG [Listener at localhost/43351] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-19 21:15:41,017 DEBUG [Listener at localhost/43351] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-19 21:15:41,017 DEBUG [Listener at localhost/43351] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
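The stretch of log above shows the first mini cluster being torn down ("Shutdown of 1 master(s) and 3 regionserver(s) complete", "Minicluster is down") and a fresh one being brought up with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}, including a new MiniZooKeeperCluster on clientPort=57109. A minimal sketch of the test-side calls that typically drive this cycle, assuming the branch-2.4 HBaseTestingUtility/StartMiniClusterOption builder API (this is an illustration, not the test's actual source):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterCycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Same cluster shape as the StartMiniClusterOption dump in the log above.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // starts DFS, a MiniZooKeeperCluster and the HBase daemons
        try {
          // ... test body would run against the cluster here ...
        } finally {
          util.shutdownMiniCluster();    // produces the "Minicluster is down" line seen above
        }
      }
    }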
2023-07-19 21:15:41,018 INFO [Listener at localhost/43351] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:41,018 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,018 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,019 INFO [Listener at localhost/43351] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:41,019 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,019 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:41,019 INFO [Listener at localhost/43351] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:41,019 INFO [Listener at localhost/43351] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41365 2023-07-19 21:15:41,020 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,021 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,022 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41365 connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:41,033 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:413650x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:41,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41365-0x1017f70bb450000 connected 2023-07-19 21:15:41,055 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:41,055 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:41,056 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:41,058 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41365 2023-07-19 21:15:41,058 DEBUG [Listener at localhost/43351] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41365 2023-07-19 21:15:41,058 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41365 2023-07-19 21:15:41,059 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41365 2023-07-19 21:15:41,062 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41365 2023-07-19 21:15:41,064 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:41,064 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:41,064 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:41,064 INFO [Listener at localhost/43351] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-19 21:15:41,065 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:41,065 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:41,065 INFO [Listener at localhost/43351] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
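At this point the new master has bound its NettyRpcServer on port 41365 and registered with the mini ZooKeeper ensemble at 127.0.0.1:57109 (the RecoverableZooKeeper and ZKUtil watcher lines above). Clients reach such a cluster purely through that quorum; in a test this is normally hidden behind HBaseTestingUtility.getConnection(), but an explicit client-side sketch, assuming the standard hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort settings and the ordinary ConnectionFactory API, would look roughly like:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        // clientPort from the MiniZooKeeperCluster line above
        conf.setInt("hbase.zookeeper.property.clientPort", 57109);

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
        }
      }
    }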
2023-07-19 21:15:41,065 INFO [Listener at localhost/43351] http.HttpServer(1146): Jetty bound to port 43941 2023-07-19 21:15:41,065 INFO [Listener at localhost/43351] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:41,067 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,068 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@534c3da6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:41,068 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,068 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@126fe908{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:41,180 INFO [Listener at localhost/43351] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:41,181 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:41,181 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:41,181 INFO [Listener at localhost/43351] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:41,182 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,183 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@588c102c{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/jetty-0_0_0_0-43941-hbase-server-2_4_18-SNAPSHOT_jar-_-any-351681035193792430/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:41,184 INFO [Listener at localhost/43351] server.AbstractConnector(333): Started ServerConnector@700ffae{HTTP/1.1, (http/1.1)}{0.0.0.0:43941} 2023-07-19 21:15:41,184 INFO [Listener at localhost/43351] server.Server(415): Started @45347ms 2023-07-19 21:15:41,184 INFO [Listener at localhost/43351] master.HMaster(444): hbase.rootdir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5, hbase.cluster.distributed=false 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,198 INFO 
[Listener at localhost/43351] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:41,198 INFO [Listener at localhost/43351] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:41,200 INFO [Listener at localhost/43351] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44851 2023-07-19 21:15:41,200 INFO [Listener at localhost/43351] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:41,201 DEBUG [Listener at localhost/43351] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:41,202 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,203 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,203 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44851 connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:41,208 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:448510x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:41,209 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44851-0x1017f70bb450001 connected 2023-07-19 21:15:41,210 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:41,210 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:41,211 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:41,211 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44851 2023-07-19 21:15:41,211 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44851 2023-07-19 21:15:41,212 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44851 2023-07-19 21:15:41,212 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44851 2023-07-19 21:15:41,212 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44851 2023-07-19 21:15:41,214 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:41,214 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:41,214 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:41,214 INFO [Listener at localhost/43351] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:41,214 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:41,215 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:41,215 INFO [Listener at localhost/43351] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:41,215 INFO [Listener at localhost/43351] http.HttpServer(1146): Jetty bound to port 41683 2023-07-19 21:15:41,215 INFO [Listener at localhost/43351] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:41,219 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,219 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@491b58c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:41,219 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,220 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41c0687c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:41,333 INFO [Listener at localhost/43351] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:41,333 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:41,334 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:41,334 INFO [Listener at localhost/43351] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-19 21:15:41,335 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,336 INFO 
[Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@52e4a549{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/jetty-0_0_0_0-41683-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5952895949367564658/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:41,337 INFO [Listener at localhost/43351] server.AbstractConnector(333): Started ServerConnector@69e30e67{HTTP/1.1, (http/1.1)}{0.0.0.0:41683} 2023-07-19 21:15:41,337 INFO [Listener at localhost/43351] server.Server(415): Started @45500ms 2023-07-19 21:15:41,348 INFO [Listener at localhost/43351] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:41,349 INFO [Listener at localhost/43351] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:41,350 INFO [Listener at localhost/43351] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46655 2023-07-19 21:15:41,350 INFO [Listener at localhost/43351] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:41,351 DEBUG [Listener at localhost/43351] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:41,352 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,352 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,353 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46655 connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:41,357 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:466550x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 
21:15:41,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46655-0x1017f70bb450002 connected 2023-07-19 21:15:41,358 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:41,359 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:41,359 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:41,360 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46655 2023-07-19 21:15:41,360 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46655 2023-07-19 21:15:41,362 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46655 2023-07-19 21:15:41,363 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46655 2023-07-19 21:15:41,363 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46655 2023-07-19 21:15:41,365 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:41,365 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:41,365 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:41,366 INFO [Listener at localhost/43351] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:41,366 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:41,366 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:41,366 INFO [Listener at localhost/43351] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
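
The ZKUtil entries just above show each region server registering one-shot watches on znodes that do not exist yet (/hbase/master, /hbase/running, /hbase/acl), so the process is notified the moment the active master creates them. A minimal sketch of that pattern using the plain Apache ZooKeeper client (not HBase's ZKWatcher/ZKUtil wrappers; the ensemble address below is illustrative):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchAbsentZNode {
    public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        // Illustrative ensemble address; the test above runs its quorum on 127.0.0.1:57109.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // Fired once some other process creates /hbase/master.
                if (event.getType() == Event.EventType.NodeCreated
                        && "/hbase/master".equals(event.getPath())) {
                    created.countDown();
                }
            }
        });
        // exists() returns null because the znode is absent, but it still leaves a
        // one-shot watch behind: the same "Set watcher on znode that does not yet
        // exist" the ZKUtil debug lines report.
        zk.exists("/hbase/master", true);
        created.await();
        zk.close();
    }
}
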
2023-07-19 21:15:41,366 INFO [Listener at localhost/43351] http.HttpServer(1146): Jetty bound to port 42951 2023-07-19 21:15:41,367 INFO [Listener at localhost/43351] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:41,371 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,371 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@553c89d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:41,371 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,371 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2df383ac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:41,484 INFO [Listener at localhost/43351] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:41,485 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:41,485 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:41,485 INFO [Listener at localhost/43351] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:41,486 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,487 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@744bd545{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/jetty-0_0_0_0-42951-hbase-server-2_4_18-SNAPSHOT_jar-_-any-609087328885739993/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:41,489 INFO [Listener at localhost/43351] server.AbstractConnector(333): Started ServerConnector@2e0aae4b{HTTP/1.1, (http/1.1)}{0.0.0.0:42951} 2023-07-19 21:15:41,489 INFO [Listener at localhost/43351] server.Server(415): Started @45652ms 2023-07-19 21:15:41,500 INFO [Listener at localhost/43351] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:41,500 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,501 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,501 INFO [Listener at localhost/43351] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:41,501 INFO 
[Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:41,501 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:41,501 INFO [Listener at localhost/43351] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:41,502 INFO [Listener at localhost/43351] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44179 2023-07-19 21:15:41,502 INFO [Listener at localhost/43351] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:41,503 DEBUG [Listener at localhost/43351] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:41,503 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,504 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,505 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44179 connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:41,508 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:441790x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:41,509 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:441790x0, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:41,510 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44179-0x1017f70bb450003 connected 2023-07-19 21:15:41,510 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:41,511 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:41,511 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44179 2023-07-19 21:15:41,511 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44179 2023-07-19 21:15:41,512 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44179 2023-07-19 21:15:41,515 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44179 2023-07-19 21:15:41,515 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=44179 2023-07-19 21:15:41,517 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:41,517 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:41,517 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:41,518 INFO [Listener at localhost/43351] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:41,518 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:41,518 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:41,518 INFO [Listener at localhost/43351] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:41,519 INFO [Listener at localhost/43351] http.HttpServer(1146): Jetty bound to port 39395 2023-07-19 21:15:41,519 INFO [Listener at localhost/43351] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:41,526 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,526 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@706df11{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:41,527 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,527 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2778ad29{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:41,640 INFO [Listener at localhost/43351] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:41,641 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:41,641 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:41,641 INFO [Listener at localhost/43351] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:41,642 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:41,643 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3f5b1399{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/jetty-0_0_0_0-39395-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1732521608464558840/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:41,644 INFO [Listener at localhost/43351] server.AbstractConnector(333): Started ServerConnector@60c861b8{HTTP/1.1, (http/1.1)}{0.0.0.0:39395} 2023-07-19 21:15:41,645 INFO [Listener at localhost/43351] server.Server(415): Started @45807ms 2023-07-19 21:15:41,645 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:41,645 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 21:15:41,645 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 21:15:41,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:41,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@410280d6{HTTP/1.1, (http/1.1)}{0.0.0.0:42907} 2023-07-19 21:15:41,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @45813ms 2023-07-19 21:15:41,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,651 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:41,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,654 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:41,654 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:41,654 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:41,654 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, 
quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:41,654 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:41,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41365,1689801341018 from backup master directory 2023-07-19 21:15:41,657 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:41,658 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,658 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-19 21:15:41,658 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
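
The sequence above (create a znode under /hbase/backup-masters, take /hbase/master, then delete the backup entry) is HBase's ZooKeeper-based master election. Roughly the same idea, reduced to the plain ZooKeeper API, might look like the sketch below; HBase itself goes through ActiveMasterManager/MasterAddressTracker and stores a serialized ServerName rather than a raw string, so this is illustrative only.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public final class ClaimMaster {
    /** Returns true if this process managed to become the active master. */
    static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
        try {
            // Ephemeral: the znode disappears with the session, letting a backup take over.
            zk.create("/hbase/master", serverName.getBytes(StandardCharsets.UTF_8),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;   // we now own /hbase/master
        } catch (KeeperException.NodeExistsException e) {
            return false;  // another master is already active; keep watching instead
        }
    }
}
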
2023-07-19 21:15:41,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/hbase.id with ID: 54797bf0-53fd-4af4-972a-cf3b4dd9d5c4 2023-07-19 21:15:41,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:41,692 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,702 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2d2c10c4 to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:41,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@370d7855, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:41,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:41,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-19 21:15:41,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:41,712 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store-tmp 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:41,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:41,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:41,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:41,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/WALs/jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41365%2C1689801341018, suffix=, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/WALs/jenkins-hbase4.apache.org,41365,1689801341018, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/oldWALs, maxLogs=10 2023-07-19 21:15:41,740 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:41,740 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:41,740 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:41,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/WALs/jenkins-hbase4.apache.org,41365,1689801341018/jenkins-hbase4.apache.org%2C41365%2C1689801341018.1689801341724 2023-07-19 21:15:41,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK], DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK], DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK]] 2023-07-19 21:15:41,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', 
STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:41,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:41,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,750 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,751 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-19 21:15:41,752 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-19 21:15:41,753 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:41,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-19 21:15:41,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:41,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10500589760, jitterRate=-0.022056370973587036}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:41,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:41,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-19 21:15:41,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-19 21:15:41,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-19 21:15:41,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
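
The "Opened 1595e783b53d99cd5eef43b6debb2682" entry a few lines up reports desiredMaxFileSize=10500589760 alongside jitterRate=-0.0220...; that value is simply the region max file size with the logged jitter applied, assuming the default hbase.hregion.max.filesize of 10 GiB (this excerpt does not show the setting being overridden):

public class SplitSizeJitter {
    public static void main(String[] args) {
        long maxFileSize = 10L * 1024 * 1024 * 1024;   // assumed default hbase.hregion.max.filesize, 10737418240 bytes
        double jitterRate = -0.022056370973587036;     // value printed in the log entry above
        long desired = maxFileSize + (long) (maxFileSize * jitterRate);
        // Roughly 10,500,589,760 bytes, i.e. the desiredMaxFileSize logged by the split
        // policy (the final digit may differ by one depending on HBase's internal rounding).
        System.out.println(desired);
    }
}
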
2023-07-19 21:15:41,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-19 21:15:41,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-19 21:15:41,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-19 21:15:41,774 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,775 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-19 21:15:41,775 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-19 21:15:41,776 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-19 21:15:41,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:41,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:41,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:41,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:41,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41365,1689801341018, sessionid=0x1017f70bb450000, setting cluster-up flag (Was=false) 2023-07-19 21:15:41,782 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-19 21:15:41,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,791 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:41,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-19 21:15:41,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:41,797 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.hbase-snapshot/.tmp 2023-07-19 21:15:41,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-19 21:15:41,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-19 21:15:41,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-19 21:15:41,799 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:41,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-19 21:15:41,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:41,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:41,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
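
For context on the two "System coprocessor ... loaded" entries above: master-side coprocessors such as the rsgroup admin endpoint are normally wired in through the master coprocessor configuration key before the master starts. A hedged sketch follows; the class names are taken from the log, but the exact setup call this test suite uses is not shown in this excerpt.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public final class RsGroupCoprocessorConfig {
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // MASTER_COPROCESSOR_CONF_KEY is "hbase.coprocessor.master.classes".
        conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint,"
                + "org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver");
        return conf;
    }
}
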
2023-07-19 21:15:41,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-19 21:15:41,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:41,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689801371813 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-19 21:15:41,813 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:41,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,813 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-19 21:15:41,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-19 21:15:41,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-19 21:15:41,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-19 21:15:41,815 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:41,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801341815,5,FailOnTimeoutGroup] 2023-07-19 21:15:41,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801341815,5,FailOnTimeoutGroup] 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. 
Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,828 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:41,828 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:41,828 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 2023-07-19 21:15:41,837 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:41,838 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:41,840 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/info 2023-07-19 21:15:41,840 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for 
minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:41,841 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:41,841 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:41,842 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:41,842 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:41,843 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:41,843 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:41,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/table 2023-07-19 21:15:41,844 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:41,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:41,845 DEBUG 
[PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740 2023-07-19 21:15:41,846 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740 2023-07-19 21:15:41,847 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(951): ClusterId : 54797bf0-53fd-4af4-972a-cf3b4dd9d5c4 2023-07-19 21:15:41,847 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(951): ClusterId : 54797bf0-53fd-4af4-972a-cf3b4dd9d5c4 2023-07-19 21:15:41,848 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:41,850 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:41,848 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(951): ClusterId : 54797bf0-53fd-4af4-972a-cf3b4dd9d5c4 2023-07-19 21:15:41,850 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:41,851 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-19 21:15:41,852 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:41,857 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:41,857 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:41,857 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:41,857 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:41,857 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:41,857 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:41,858 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:41,859 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11006656160, jitterRate=0.025074735283851624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:41,859 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:41,859 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:41,859 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:41,859 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 
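
The hbase:meta descriptor printed above lists per-family attributes, for example the 'info' family with BLOCKSIZE 8192, IN_MEMORY true, VERSIONS 3 and no bloom filter. Expressed with the public HBase 2.x client builders on an ordinary user table, the same attributes would look like the sketch below; the real meta descriptor is assembled internally by FSTableDescriptors and additionally carries IS_META plus the MultiRowMutationEndpoint coprocessor.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class InfoFamilyDescriptor {
    public static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))     // illustrative table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(3)                   // VERSIONS => '3'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .setBloomFilterType(BloomType.NONE)  // BLOOMFILTER => 'NONE'
                .build())
            .build();
    }
}
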
2023-07-19 21:15:41,859 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:41,859 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:41,859 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:41,860 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:41,860 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:41,861 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ReadOnlyZKClient(139): Connect 0x7b28b37b to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:41,861 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:41,861 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:41,862 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ReadOnlyZKClient(139): Connect 0x5c368893 to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:41,862 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-19 21:15:41,863 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-19 21:15:41,868 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ReadOnlyZKClient(139): Connect 0x24fe32ce to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:41,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-19 21:15:41,871 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-19 21:15:41,874 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-19 21:15:41,876 DEBUG [RS:0;jenkins-hbase4:44851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75ef426e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:41,876 DEBUG [RS:2;jenkins-hbase4:44179] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a39c821, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:41,876 DEBUG [RS:0;jenkins-hbase4:44851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71d0d80b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:41,876 DEBUG [RS:1;jenkins-hbase4:46655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c7fdcc2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:41,876 DEBUG [RS:2;jenkins-hbase4:44179] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e0659be, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:41,876 DEBUG [RS:1;jenkins-hbase4:46655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6da01527, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:41,884 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44851 2023-07-19 21:15:41,884 INFO [RS:0;jenkins-hbase4:44851] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:41,885 INFO [RS:0;jenkins-hbase4:44851] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:41,885 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:41,885 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41365,1689801341018 with isa=jenkins-hbase4.apache.org/172.31.14.131:44851, startcode=1689801341197 2023-07-19 21:15:41,885 DEBUG [RS:0;jenkins-hbase4:44851] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:41,887 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46655 2023-07-19 21:15:41,887 INFO [RS:1;jenkins-hbase4:46655] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:41,887 INFO [RS:1;jenkins-hbase4:46655] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:41,887 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53455, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:41,887 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:41,889 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,889 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
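The ReadOnlyZKClient connections a few lines above (quorum 127.0.0.1:57109, session timeout 90000 ms, 30 retries, 1000 ms retry interval) are driven by client-side configuration. A minimal sketch of pointing a client at the same quorum follows; hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort and zookeeper.session.timeout are standard HBase keys, while the retry/keep-alive knobs are omitted because their exact property names are not shown in this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZkClientConfSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Same quorum/port the mini-cluster publishes above (127.0.0.1:57109).
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 57109);
        // Matches "session timeout=90000ms" in the ReadOnlyZKClient log lines.
        conf.setInt("zookeeper.session.timeout", 90000);
        return conf;
      }
    }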
2023-07-19 21:15:41,890 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-19 21:15:41,890 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41365,1689801341018 with isa=jenkins-hbase4.apache.org/172.31.14.131:46655, startcode=1689801341348 2023-07-19 21:15:41,890 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 2023-07-19 21:15:41,890 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44179 2023-07-19 21:15:41,890 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45117 2023-07-19 21:15:41,890 INFO [RS:2;jenkins-hbase4:44179] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:41,890 INFO [RS:2;jenkins-hbase4:44179] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:41,890 DEBUG [RS:1;jenkins-hbase4:46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:41,890 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1022): About to register with Master. 2023-07-19 21:15:41,890 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43941 2023-07-19 21:15:41,891 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41365,1689801341018 with isa=jenkins-hbase4.apache.org/172.31.14.131:44179, startcode=1689801341500 2023-07-19 21:15:41,891 DEBUG [RS:2;jenkins-hbase4:44179] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:41,892 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:41,892 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,892 WARN [RS:0;jenkins-hbase4:44851] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 21:15:41,892 INFO [RS:0;jenkins-hbase4:44851] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:41,893 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,895 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44851,1689801341197] 2023-07-19 21:15:41,897 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:41,897 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60461, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:41,898 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,898 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:41,899 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-19 21:15:41,899 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,899 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-19 21:15:41,899 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-19 21:15:41,899 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 2023-07-19 21:15:41,899 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45117 2023-07-19 21:15:41,899 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43941 2023-07-19 21:15:41,899 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 2023-07-19 21:15:41,899 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45117 2023-07-19 21:15:41,899 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43941 2023-07-19 21:15:41,905 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:41,907 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,907 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,907 WARN [RS:2;jenkins-hbase4:44179] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-19 21:15:41,907 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46655,1689801341348] 2023-07-19 21:15:41,907 INFO [RS:2;jenkins-hbase4:44179] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:41,907 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44179,1689801341500] 2023-07-19 21:15:41,907 WARN [RS:1;jenkins-hbase4:46655] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
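Each region server that registers is added to the rsgroup named "default", which is what the ServerEventsListenerThread counts (1, 2, 3) above reflect. A hedged sketch of reading that membership through the hbase-rsgroup client module is below; RSGroupAdminClient and RSGroupInfo are real classes in this module, but the exact constructor and method signatures should be verified against the 2.4 sources before relying on them.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          // Once all three servers above have reported in, this should list them.
          System.out.println(defaultGroup.getServers());
        }
      }
    }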
2023-07-19 21:15:41,907 INFO [RS:1;jenkins-hbase4:46655] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:41,908 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,908 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,908 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,909 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,909 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,910 DEBUG [RS:0;jenkins-hbase4:44851] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:41,910 INFO [RS:0;jenkins-hbase4:44851] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:41,915 INFO [RS:0;jenkins-hbase4:44851] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:41,915 INFO [RS:0;jenkins-hbase4:44851] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:41,918 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,919 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:41,922 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
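All three region servers instantiate AsyncFSWALProvider here, and further down each logs a WAL configuration of blocksize=256 MB, rollsize=128 MB, maxLogs=32. A minimal configuration sketch is below; hbase.wal.provider is the standard selector key, while hbase.regionserver.maxlogs is an assumed name for the property behind maxLogs=32 and should be double-checked.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider, as instantiated above.
        conf.set("hbase.wal.provider", "asyncfs");
        // Assumed key for the "maxLogs=32" value printed in the WAL configuration lines.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }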
2023-07-19 21:15:41,922 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:41,923 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,923 DEBUG [RS:0;jenkins-hbase4:44851] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,923 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,924 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,924 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,924 DEBUG [RS:2;jenkins-hbase4:44179] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:41,924 INFO [RS:2;jenkins-hbase4:44179] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:41,925 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:41,925 INFO [RS:1;jenkins-hbase4:46655] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:41,927 INFO [RS:1;jenkins-hbase4:46655] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:41,927 INFO [RS:2;jenkins-hbase4:44179] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:41,927 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,927 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,927 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,927 INFO [RS:1;jenkins-hbase4:46655] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:41,927 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,928 INFO [RS:2;jenkins-hbase4:44179] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:41,928 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,928 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:41,929 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:41,930 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
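The MemStoreFlusher line (globalMemStoreLimit=782.4 M, low mark 743.3 M, i.e. 0.95 of the limit) and the PressureAwareCompactionThroughputController bounds (100 MB/s upper, 50 MB/s lower, 60000 ms tuning period) come from a small set of sizing properties. A sketch follows; the two global-memstore keys are standard, while the two throughput-bound keys are names recalled from memory, not shown in this log, and should be verified.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreAndCompactionSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap usable by all memstores (782.4 M here); the low-water mark
        // defaults to 0.95 of that limit, matching 743.3 M ~= 0.95 * 782.4 M above.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Assumed keys for the 100 MB/s and 50 MB/s compaction throughput bounds logged above.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }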
2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:41,930 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,930 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:1;jenkins-hbase4:46655] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,931 DEBUG [RS:2;jenkins-hbase4:44179] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:41,938 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,938 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,938 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,940 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,940 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,940 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,944 INFO [RS:0;jenkins-hbase4:44851] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:41,945 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44851,1689801341197-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,954 INFO [RS:2;jenkins-hbase4:44179] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:41,954 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44179,1689801341500-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:41,955 INFO [RS:0;jenkins-hbase4:44851] regionserver.Replication(203): jenkins-hbase4.apache.org,44851,1689801341197 started 2023-07-19 21:15:41,955 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44851,1689801341197, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44851, sessionid=0x1017f70bb450001 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44851,1689801341197' 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:41,955 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:41,956 DEBUG [RS:0;jenkins-hbase4:44851] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:41,956 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44851,1689801341197' 2023-07-19 21:15:41,956 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:41,956 DEBUG [RS:0;jenkins-hbase4:44851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:41,956 INFO [RS:1;jenkins-hbase4:46655] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:41,956 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46655,1689801341348-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:41,956 DEBUG [RS:0;jenkins-hbase4:44851] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:41,956 INFO [RS:0;jenkins-hbase4:44851] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:41,956 INFO [RS:0;jenkins-hbase4:44851] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 21:15:41,965 INFO [RS:2;jenkins-hbase4:44179] regionserver.Replication(203): jenkins-hbase4.apache.org,44179,1689801341500 started 2023-07-19 21:15:41,965 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44179,1689801341500, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44179, sessionid=0x1017f70bb450003 2023-07-19 21:15:41,965 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:41,965 DEBUG [RS:2;jenkins-hbase4:44179] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,965 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44179,1689801341500' 2023-07-19 21:15:41,965 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:41,965 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44179,1689801341500' 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:41,966 DEBUG [RS:2;jenkins-hbase4:44179] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:41,966 INFO [RS:2;jenkins-hbase4:44179] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:41,966 INFO [RS:2;jenkins-hbase4:44179] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 21:15:41,976 INFO [RS:1;jenkins-hbase4:46655] regionserver.Replication(203): jenkins-hbase4.apache.org,46655,1689801341348 started 2023-07-19 21:15:41,976 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46655,1689801341348, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46655, sessionid=0x1017f70bb450002 2023-07-19 21:15:41,976 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:41,976 DEBUG [RS:1;jenkins-hbase4:46655] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,976 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46655,1689801341348' 2023-07-19 21:15:41,976 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:41,976 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46655,1689801341348' 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:41,977 DEBUG [RS:1;jenkins-hbase4:46655] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:41,977 INFO [RS:1;jenkins-hbase4:46655] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:41,978 INFO [RS:1;jenkins-hbase4:46655] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
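All three servers report "Quota support disabled", so neither the RPC quota manager nor the space quota manager starts. If a test needed them, quotas would have to be enabled in the cluster configuration before startup; a minimal sketch using the standard hbase.quota.enabled switch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaEnableSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Left at its default of false, this flag produces the
        // "Quota support disabled" messages seen on every region server above.
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }
    }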
2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:42,025 DEBUG [jenkins-hbase4:41365] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:42,026 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44179,1689801341500, state=OPENING 2023-07-19 21:15:42,028 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-19 21:15:42,029 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:42,030 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:42,030 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44179,1689801341500}] 2023-07-19 21:15:42,058 INFO [RS:0;jenkins-hbase4:44851] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44851%2C1689801341197, suffix=, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44851,1689801341197, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs, maxLogs=32 2023-07-19 21:15:42,068 INFO [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44179%2C1689801341500, suffix=, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44179,1689801341500, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs, maxLogs=32 2023-07-19 21:15:42,075 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:42,076 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:42,076 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:42,079 INFO [RS:1;jenkins-hbase4:46655] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46655%2C1689801341348, suffix=, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,46655,1689801341348, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs, maxLogs=32 2023-07-19 21:15:42,083 INFO [RS:0;jenkins-hbase4:44851] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44851,1689801341197/jenkins-hbase4.apache.org%2C44851%2C1689801341197.1689801342058 2023-07-19 21:15:42,083 DEBUG [RS:0;jenkins-hbase4:44851] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK], DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK]] 2023-07-19 21:15:42,088 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:42,088 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:42,089 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:42,095 INFO [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44179,1689801341500/jenkins-hbase4.apache.org%2C44179%2C1689801341500.1689801342068 2023-07-19 21:15:42,097 DEBUG [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK], DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK]] 2023-07-19 21:15:42,101 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:42,101 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:42,101 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:42,103 INFO [RS:1;jenkins-hbase4:46655] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,46655,1689801341348/jenkins-hbase4.apache.org%2C46655%2C1689801341348.1689801342079 2023-07-19 21:15:42,106 DEBUG [RS:1;jenkins-hbase4:46655] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK], DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK], DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK]] 2023-07-19 21:15:42,108 WARN [ReadOnlyZKClient-127.0.0.1:57109@0x2d2c10c4] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-19 21:15:42,108 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:42,109 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51024, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:42,109 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44179] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51024 deadline: 1689801402109, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:42,184 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:42,186 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:42,188 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:42,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-19 21:15:42,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:42,193 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44179%2C1689801341500.meta, suffix=.meta, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44179,1689801341500, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs, maxLogs=32 2023-07-19 21:15:42,219 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:42,220 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:42,220 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:42,222 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,44179,1689801341500/jenkins-hbase4.apache.org%2C44179%2C1689801341500.meta.1689801342193.meta 2023-07-19 21:15:42,222 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK], DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK], DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK]] 2023-07-19 21:15:42,222 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-19 21:15:42,223 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
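The MultiRowMutationEndpoint coprocessor is loaded "from HTD of hbase:meta", i.e. it is declared on the table descriptor rather than in site configuration. For a user table the equivalent declaration looks roughly like the sketch below (the table name "example" is purely illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
            // Same coprocessor class the meta table descriptor carries in the log above.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }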
2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-19 21:15:42,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-19 21:15:42,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-19 21:15:42,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/info 2023-07-19 21:15:42,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/info 2023-07-19 21:15:42,226 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-19 21:15:42,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:42,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-19 21:15:42,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:42,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/rep_barrier 2023-07-19 21:15:42,228 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-19 21:15:42,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:42,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-19 21:15:42,229 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/table 2023-07-19 21:15:42,229 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/table 2023-07-19 21:15:42,229 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-19 21:15:42,230 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:42,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740 2023-07-19 21:15:42,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740 2023-07-19 21:15:42,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
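The 42.7 M fallback above is simply the region memstore flush size divided by the number of column families in hbase:meta: 128 MB across the three families (info, rep_barrier, table). The arithmetic matches the flushSizeLowerBound=44739242 printed with the FlushLargeStoresPolicy elsewhere in this log:

    public class FlushLowerBoundSketch {
      public static void main(String[] args) {
        long memstoreFlushSize = 128L * 1024 * 1024;   // 134217728 bytes
        int columnFamilies = 3;                        // info, rep_barrier, table
        long lowerBound = memstoreFlushSize / columnFamilies;
        // Prints 44739242 (~42.7 MB), matching FlushLargeStoresPolicy{flushSizeLowerBound=44739242}.
        System.out.println(lowerBound);
      }
    }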
2023-07-19 21:15:42,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-19 21:15:42,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11189754240, jitterRate=0.04212707281112671}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-19 21:15:42,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-19 21:15:42,236 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689801342184 2023-07-19 21:15:42,240 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-19 21:15:42,241 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-19 21:15:42,241 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44179,1689801341500, state=OPEN 2023-07-19 21:15:42,244 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-19 21:15:42,244 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-19 21:15:42,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-19 21:15:42,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44179,1689801341500 in 214 msec 2023-07-19 21:15:42,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-19 21:15:42,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 377 msec 2023-07-19 21:15:42,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 448 msec 2023-07-19 21:15:42,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689801342248, completionTime=-1 2023-07-19 21:15:42,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-19 21:15:42,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-19 21:15:42,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-19 21:15:42,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689801402253 2023-07-19 21:15:42,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689801462253 2023-07-19 21:15:42,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-19 21:15:42,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41365,1689801341018-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41365,1689801341018-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41365,1689801341018-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41365, period=300000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
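The entries above end with the master noticing that the namespace table is missing and creating 'hbase:namespace' with an in-memory 'info' family, ten versions and an 8 KB block size. As a rough illustration only (not the master's internal code path), the same attributes from that log entry can be expressed with the HBase 2.x descriptor builders; the class name below is invented for the sketch.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceDescriptorSketch {
  // Rebuilds the column-family attributes printed in the create-table entry above.
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:namespace"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setInMemory(true)   // IN_MEMORY => 'true'
            .setMaxVersions(10)  // VERSIONS => '10'
            .setBlocksize(8192)  // BLOCKSIZE => '8192'
            .build())
        .build();
  }
}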
2023-07-19 21:15:42,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:42,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-19 21:15:42,260 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-19 21:15:42,262 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:42,263 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:42,265 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,265 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a empty. 2023-07-19 21:15:42,266 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,266 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-19 21:15:42,278 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:42,279 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ac4679162104aa91ac4bdddf31746f5a, NAME => 'hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp 2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ac4679162104aa91ac4bdddf31746f5a, disabling compactions & flushes 2023-07-19 21:15:42,289 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 
2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. after waiting 0 ms 2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:42,289 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:42,289 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ac4679162104aa91ac4bdddf31746f5a: 2023-07-19 21:15:42,292 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:42,293 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801342293"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801342293"}]},"ts":"1689801342293"} 2023-07-19 21:15:42,296 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:42,296 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:42,297 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801342296"}]},"ts":"1689801342296"} 2023-07-19 21:15:42,298 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-19 21:15:42,302 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:42,302 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:42,302 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:42,302 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:42,302 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:42,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac4679162104aa91ac4bdddf31746f5a, ASSIGN}] 2023-07-19 21:15:42,305 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac4679162104aa91ac4bdddf31746f5a, ASSIGN 2023-07-19 21:15:42,305 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ac4679162104aa91ac4bdddf31746f5a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44179,1689801341500; forceNewPlan=false, retain=false 2023-07-19 21:15:42,412 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:42,413 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-19 21:15:42,415 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:42,416 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:42,417 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,418 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3 empty. 
2023-07-19 21:15:42,418 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,418 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-19 21:15:42,430 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:42,431 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 79ffd2ec6e374dad66dbd8a0c62361e3, NAME => 'hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp 2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 79ffd2ec6e374dad66dbd8a0c62361e3, disabling compactions & flushes 2023-07-19 21:15:42,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. after waiting 0 ms 2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 
2023-07-19 21:15:42,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 79ffd2ec6e374dad66dbd8a0c62361e3: 2023-07-19 21:15:42,441 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:42,442 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801342442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801342442"}]},"ts":"1689801342442"} 2023-07-19 21:15:42,443 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:42,444 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:42,444 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801342444"}]},"ts":"1689801342444"} 2023-07-19 21:15:42,445 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-19 21:15:42,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:42,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:42,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:42,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:42,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:42,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79ffd2ec6e374dad66dbd8a0c62361e3, ASSIGN}] 2023-07-19 21:15:42,449 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79ffd2ec6e374dad66dbd8a0c62361e3, ASSIGN 2023-07-19 21:15:42,449 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=79ffd2ec6e374dad66dbd8a0c62361e3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46655,1689801341348; forceNewPlan=false, retain=false 2023-07-19 21:15:42,450 INFO [jenkins-hbase4:41365] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
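The block above creates and assigns the hbase:rsgroup system table that backs region server group membership. A minimal sketch, assuming an already-open Connection named conn, of how a client of this hbase-rsgroup module could read the groups back once that table is online; it is illustrative only, not the test's own code, and the class name is made up for the sketch.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroupsSketch {
  static void listGroups(Connection conn) throws IOException {
    RSGroupAdmin groupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : groupAdmin.listRSGroups()) {
      // Every region server starts in the 'default' group until moved elsewhere.
      System.out.println(group.getName() + " -> " + group.getServers());
    }
  }
}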
2023-07-19 21:15:42,451 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac4679162104aa91ac4bdddf31746f5a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:42,452 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801342451"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801342451"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801342451"}]},"ts":"1689801342451"} 2023-07-19 21:15:42,452 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79ffd2ec6e374dad66dbd8a0c62361e3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:42,452 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801342452"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801342452"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801342452"}]},"ts":"1689801342452"} 2023-07-19 21:15:42,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure ac4679162104aa91ac4bdddf31746f5a, server=jenkins-hbase4.apache.org,44179,1689801341500}] 2023-07-19 21:15:42,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 79ffd2ec6e374dad66dbd8a0c62361e3, server=jenkins-hbase4.apache.org,46655,1689801341348}] 2023-07-19 21:15:42,606 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:42,606 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:42,608 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:42,609 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 
2023-07-19 21:15:42,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac4679162104aa91ac4bdddf31746f5a, NAME => 'hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:42,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:42,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,611 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 79ffd2ec6e374dad66dbd8a0c62361e3, NAME => 'hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:42,612 INFO [StoreOpener-ac4679162104aa91ac4bdddf31746f5a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. service=MultiRowMutationService 2023-07-19 21:15:42,612 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,614 DEBUG [StoreOpener-ac4679162104aa91ac4bdddf31746f5a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/info 2023-07-19 21:15:42,614 DEBUG [StoreOpener-ac4679162104aa91ac4bdddf31746f5a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/info 2023-07-19 21:15:42,614 INFO [StoreOpener-79ffd2ec6e374dad66dbd8a0c62361e3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,614 INFO [StoreOpener-ac4679162104aa91ac4bdddf31746f5a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac4679162104aa91ac4bdddf31746f5a columnFamilyName info 2023-07-19 21:15:42,615 INFO [StoreOpener-ac4679162104aa91ac4bdddf31746f5a-1] regionserver.HStore(310): Store=ac4679162104aa91ac4bdddf31746f5a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:42,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,622 DEBUG [StoreOpener-79ffd2ec6e374dad66dbd8a0c62361e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/m 2023-07-19 21:15:42,622 DEBUG [StoreOpener-79ffd2ec6e374dad66dbd8a0c62361e3-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/m 2023-07-19 21:15:42,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,623 INFO [StoreOpener-79ffd2ec6e374dad66dbd8a0c62361e3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 79ffd2ec6e374dad66dbd8a0c62361e3 columnFamilyName m 2023-07-19 21:15:42,624 INFO [StoreOpener-79ffd2ec6e374dad66dbd8a0c62361e3-1] regionserver.HStore(310): Store=79ffd2ec6e374dad66dbd8a0c62361e3/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:42,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,627 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:42,632 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:42,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ac4679162104aa91ac4bdddf31746f5a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11339458560, jitterRate=0.056069374084472656}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:42,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ac4679162104aa91ac4bdddf31746f5a: 2023-07-19 21:15:42,633 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a., pid=8, masterSystemTime=1689801342605 2023-07-19 21:15:42,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post 
open deploy task for hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:42,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:42,637 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac4679162104aa91ac4bdddf31746f5a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:42,637 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689801342637"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801342637"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801342637"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801342637"}]},"ts":"1689801342637"} 2023-07-19 21:15:42,641 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-19 21:15:42,641 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure ac4679162104aa91ac4bdddf31746f5a, server=jenkins-hbase4.apache.org,44179,1689801341500 in 186 msec 2023-07-19 21:15:42,644 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-19 21:15:42,644 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ac4679162104aa91ac4bdddf31746f5a, ASSIGN in 339 msec 2023-07-19 21:15:42,645 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:42,646 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801342645"}]},"ts":"1689801342645"} 2023-07-19 21:15:42,647 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-19 21:15:42,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:42,650 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:42,652 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 391 msec 2023-07-19 21:15:42,655 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:42,656 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 79ffd2ec6e374dad66dbd8a0c62361e3; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@14857f2a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:42,656 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 79ffd2ec6e374dad66dbd8a0c62361e3: 2023-07-19 21:15:42,656 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3., pid=9, masterSystemTime=1689801342606 2023-07-19 21:15:42,659 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,660 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:42,660 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79ffd2ec6e374dad66dbd8a0c62361e3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:42,660 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689801342660"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801342660"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801342660"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801342660"}]},"ts":"1689801342660"} 2023-07-19 21:15:42,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-19 21:15:42,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-19 21:15:42,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 79ffd2ec6e374dad66dbd8a0c62361e3, server=jenkins-hbase4.apache.org,46655,1689801341348 in 208 msec 2023-07-19 21:15:42,663 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:42,664 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:42,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-19 21:15:42,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=79ffd2ec6e374dad66dbd8a0c62361e3, ASSIGN in 215 msec 2023-07-19 21:15:42,666 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:42,666 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801342666"}]},"ts":"1689801342666"} 2023-07-19 21:15:42,667 INFO [PEWorker-2] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-19 21:15:42,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-19 21:15:42,671 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:42,672 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 259 msec 2023-07-19 21:15:42,674 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:42,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-19 21:15:42,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-19 21:15:42,686 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:42,690 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-19 21:15:42,704 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-19 21:15:42,707 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.048sec 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41365,1689801341018-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-19 21:15:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41365,1689801341018-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
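With "Master has completed initialization" logged above, ordinary clients can connect to the minicluster. A minimal sketch, assuming a Configuration pointed at this cluster's ZooKeeper quorum (127.0.0.1:57109 in the log); the balancerSwitch call mirrors the "set balanceSwitch=false" request that appears a few entries later, but the class and main method here are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ConnectSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "57109");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Turn the balancer off, as the test's later balanceSwitch=false request does.
      admin.balancerSwitch(false, true);
    }
  }
}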
2023-07-19 21:15:42,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-19 21:15:42,710 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-19 21:15:42,722 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:42,725 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50564, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:42,727 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-19 21:15:42,727 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-19 21:15:42,733 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:42,733 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:42,736 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:42,739 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-19 21:15:42,748 DEBUG [Listener at localhost/43351] zookeeper.ReadOnlyZKClient(139): Connect 0x2051be4e to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:42,755 DEBUG [Listener at localhost/43351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@788676c3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:42,759 DEBUG [hconnection-0x1fa67591-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:42,765 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51046, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:42,766 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:42,767 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:42,770 DEBUG [Listener at localhost/43351] ipc.RpcConnection(124): Using SIMPLE 
authentication for service=MasterService, sasl=false 2023-07-19 21:15:42,773 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38172, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-19 21:15:42,776 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-19 21:15:42,777 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:42,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-19 21:15:42,778 DEBUG [Listener at localhost/43351] zookeeper.ReadOnlyZKClient(139): Connect 0x6ac8a2ed to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:42,789 DEBUG [Listener at localhost/43351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46ee48d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:42,789 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:42,797 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:42,803 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017f70bb45000a connected 2023-07-19 21:15:42,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:42,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:42,815 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-19 21:15:42,852 INFO [Listener at localhost/43351] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-19 21:15:42,853 INFO [Listener at localhost/43351] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-19 21:15:42,857 INFO [Listener at localhost/43351] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33189 2023-07-19 21:15:42,857 INFO [Listener at localhost/43351] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-19 21:15:42,862 DEBUG [Listener at localhost/43351] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-19 21:15:42,863 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:42,864 INFO [Listener at localhost/43351] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-19 21:15:42,866 INFO [Listener at localhost/43351] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33189 connecting to ZooKeeper ensemble=127.0.0.1:57109 2023-07-19 21:15:42,883 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:331890x0, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-19 21:15:42,885 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(162): regionserver:331890x0, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-19 21:15:42,886 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(162): regionserver:331890x0, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-19 21:15:42,888 DEBUG [Listener at localhost/43351] zookeeper.ZKUtil(164): regionserver:331890x0, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-19 21:15:42,912 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33189 2023-07-19 21:15:42,915 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33189 2023-07-19 21:15:42,926 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33189 2023-07-19 21:15:42,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33189-0x1017f70bb45000b connected 2023-07-19 21:15:42,930 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33189 2023-07-19 21:15:42,930 DEBUG [Listener at localhost/43351] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33189 2023-07-19 21:15:42,933 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-19 21:15:42,933 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-19 21:15:42,933 INFO [Listener at localhost/43351] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-19 21:15:42,934 INFO [Listener at localhost/43351] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-19 21:15:42,934 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-19 21:15:42,934 INFO [Listener at localhost/43351] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-19 21:15:42,934 INFO [Listener at localhost/43351] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-19 21:15:42,935 INFO [Listener at localhost/43351] http.HttpServer(1146): Jetty bound to port 41215 2023-07-19 21:15:42,935 INFO [Listener at localhost/43351] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-19 21:15:42,949 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:42,950 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d31db36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,AVAILABLE} 2023-07-19 21:15:42,950 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:42,951 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7158aaaa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-19 21:15:43,092 INFO [Listener at localhost/43351] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-19 21:15:43,092 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-19 21:15:43,093 INFO [Listener at localhost/43351] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-19 21:15:43,093 INFO [Listener at localhost/43351] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-19 21:15:43,094 INFO [Listener at localhost/43351] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-19 21:15:43,095 INFO [Listener at localhost/43351] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@cbcd600{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/java.io.tmpdir/jetty-0_0_0_0-41215-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3886911666488918693/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:43,097 INFO [Listener at localhost/43351] server.AbstractConnector(333): Started ServerConnector@4babb12e{HTTP/1.1, (http/1.1)}{0.0.0.0:41215} 2023-07-19 21:15:43,097 INFO [Listener at localhost/43351] server.Server(415): Started @47260ms 2023-07-19 21:15:43,099 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(951): ClusterId : 54797bf0-53fd-4af4-972a-cf3b4dd9d5c4 2023-07-19 21:15:43,099 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-19 21:15:43,101 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-19 21:15:43,101 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-19 21:15:43,104 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-19 21:15:43,108 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ReadOnlyZKClient(139): Connect 0x50f79f06 to 127.0.0.1:57109 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-19 21:15:43,112 DEBUG [RS:3;jenkins-hbase4:33189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e307b68, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-19 21:15:43,112 DEBUG [RS:3;jenkins-hbase4:33189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a0ff476, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:43,121 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33189 2023-07-19 21:15:43,121 INFO [RS:3;jenkins-hbase4:33189] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-19 21:15:43,121 INFO [RS:3;jenkins-hbase4:33189] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-19 21:15:43,121 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-19 21:15:43,122 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41365,1689801341018 with isa=jenkins-hbase4.apache.org/172.31.14.131:33189, startcode=1689801342851 2023-07-19 21:15:43,122 DEBUG [RS:3;jenkins-hbase4:33189] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-19 21:15:43,125 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53723, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-19 21:15:43,125 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,125 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-19 21:15:43,125 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5 2023-07-19 21:15:43,125 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45117 2023-07-19 21:15:43,125 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43941 2023-07-19 21:15:43,129 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:43,129 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:43,129 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:43,129 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:43,129 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:43,130 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ZKUtil(162): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,130 WARN [RS:3;jenkins-hbase4:33189] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-19 21:15:43,130 INFO [RS:3;jenkins-hbase4:33189] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-19 21:15:43,130 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33189,1689801342851] 2023-07-19 21:15:43,130 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-19 21:15:43,130 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:43,134 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-19 21:15:43,135 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:43,135 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:43,135 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,136 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:43,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:43,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,137 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:43,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,137 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ZKUtil(162): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,138 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ZKUtil(162): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:43,138 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ZKUtil(162): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:43,138 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ZKUtil(162): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,139 DEBUG [RS:3;jenkins-hbase4:33189] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-19 21:15:43,139 INFO [RS:3;jenkins-hbase4:33189] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-19 21:15:43,140 INFO [RS:3;jenkins-hbase4:33189] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-19 21:15:43,141 INFO [RS:3;jenkins-hbase4:33189] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-19 21:15:43,141 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:43,141 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-19 21:15:43,142 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,144 DEBUG [RS:3;jenkins-hbase4:33189] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-19 21:15:43,145 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:43,145 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:43,145 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-19 21:15:43,157 INFO [RS:3;jenkins-hbase4:33189] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-19 21:15:43,157 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33189,1689801342851-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-19 21:15:43,170 INFO [RS:3;jenkins-hbase4:33189] regionserver.Replication(203): jenkins-hbase4.apache.org,33189,1689801342851 started 2023-07-19 21:15:43,170 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33189,1689801342851, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33189, sessionid=0x1017f70bb45000b 2023-07-19 21:15:43,170 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-19 21:15:43,170 DEBUG [RS:3;jenkins-hbase4:33189] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,170 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33189,1689801342851' 2023-07-19 21:15:43,170 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-19 21:15:43,170 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-19 21:15:43,171 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-19 21:15:43,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:43,171 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-19 21:15:43,171 DEBUG [RS:3;jenkins-hbase4:33189] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:43,171 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33189,1689801342851' 2023-07-19 21:15:43,171 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-19 21:15:43,172 DEBUG [RS:3;jenkins-hbase4:33189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-19 21:15:43,172 DEBUG [RS:3;jenkins-hbase4:33189] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-19 21:15:43,172 INFO [RS:3;jenkins-hbase4:33189] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-19 21:15:43,172 INFO [RS:3;jenkins-hbase4:33189] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-19 21:15:43,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:43,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:43,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:43,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:43,179 DEBUG [hconnection-0x26105268-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:43,180 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:43,186 DEBUG [hconnection-0x26105268-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-19 21:15:43,187 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50580, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-19 21:15:43,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:43,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:43,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:43,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:43,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38172 deadline: 1689802543191, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:43,192 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:43,193 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:43,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:43,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:43,194 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:43,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:43,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:43,243 INFO [Listener at localhost/43351] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 525) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1338337118-2381 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43351 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:44851Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2d2c10c4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51495@0x408dab3e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1744508475-2640 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-400614e4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@6b9de3ad java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x50f79f06 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1884557439@qtp-909943503-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34923 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:45117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/43351.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 37755 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@332b35bb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData-prefix:jenkins-hbase4.apache.org,41365,1689801341018 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45117 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51495@0x408dab3e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_235579936_17 at /127.0.0.1:47622 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-613301798_17 at /127.0.0.1:47634 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x6ac8a2ed-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5-prefix:jenkins-hbase4.apache.org,44179,1689801341500.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data3/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x24fe32ce sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-613301798_17 at /127.0.0.1:39064 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5-prefix:jenkins-hbase4.apache.org,44851,1689801341197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:47632 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to 
localhost/127.0.0.1:45117 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x49e53547-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 319509370@qtp-1972570490-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37003 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51495@0x408dab3e-SendThread(127.0.0.1:51495) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45035 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2092803584-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:39068 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43091 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x5c368893-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3228760a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2125943965-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(274417873) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1744508475-2645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2137877501@qtp-86979788-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@669b11ed java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2125943965-2272 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49e53547-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3f2312c6-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:2;jenkins-hbase4:44179-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1338337118-2375 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45035 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 
Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57109 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 45117 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1744508475-2641-acceptor-0@7fbc7ba4-ServerConnector@4babb12e{HTTP/1.1, (http/1.1)}{0.0.0.0:41215} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44851 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 43091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x6ac8a2ed-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7e07ebc6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x7b28b37b-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-15d54c43-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37503-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2033033786-2368 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:39048 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_235579936_17 at /127.0.0.1:39046 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 45117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45117 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x7b28b37b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x26105268-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44179 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45995,1689801335521 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x26105268-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x24fe32ce-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:45117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1338337118-2380 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-613301798_17 at /127.0.0.1:39014 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data1/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801341815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1338337118-2377 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x49e53547-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1338337118-2376 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2033033786-2370 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45117 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 37755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:45035 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:41365 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:58686 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-25f1c03b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2d2c10c4-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 919478019@qtp-1256845940-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33265 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1727656733@qtp-1972570490-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41365,1689801341018 java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 2132374262@qtp-909943503-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45117 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5f983009 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33189 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5-prefix:jenkins-hbase4.apache.org,44179,1689801341500 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:47648 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1744508475-2646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2125943965-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@67584994[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2054921609-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2033033786-2364-acceptor-0@7ec3f215-ServerConnector@60c861b8{HTTP/1.1, (http/1.1)}{0.0.0.0:39395} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:57109): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS:0;jenkins-hbase4:44851-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46655 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x1fa67591-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2092803584-2340 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45035 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data6/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2125943965-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2125943965-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x5c368893-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:44179Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1973673132_17 at /127.0.0.1:39028 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2092803584-2335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2304-acceptor-0@21bc1666-ServerConnector@69e30e67{HTTP/1.1, (http/1.1)}{0.0.0.0:41683} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2125943965-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37503-SendThread(127.0.0.1:51495) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x50f79f06-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2092803584-2336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:44851 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data5/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 43351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2054921609-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43351.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2303 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_235579936_17 at /127.0.0.1:58670 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-613301798_17 at /127.0.0.1:58696 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45035 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2033033786-2367 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data2/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2092803584-2338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data4/current/BP-521510616-172.31.14.131-1689801340234 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-521510616-172.31.14.131-1689801340234:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 37755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1338337118-2379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:58614 
[Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@79deb063 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43351.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: jenkins-hbase4:41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741833_1009, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 37755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-571-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2033033786-2369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@4e967fc9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49e53547-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x24fe32ce-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2051be4e-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:45117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2033033786-2365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3a2622e sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1338337118-2374 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43351-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@32d6558a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x7b28b37b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x6ac8a2ed sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2125943965-2273-acceptor-0@315482f8-ServerConnector@700ffae{HTTP/1.1, (http/1.1)}{0.0.0.0:43941} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1338337118-2378-acceptor-0@6c8af0cb-ServerConnector@410280d6{HTTP/1.1, (http/1.1)}{0.0.0.0:42907} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 43351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 2 on default port 43351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:46655Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2033033786-2363 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_790674973_17 at /127.0.0.1:58708 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 43351 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:33189Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1744508475-2644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1973673132_17 at /127.0.0.1:47594 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@ad1864d java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2092803584-2339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49e53547-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@46ba2c47 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6226ced2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1744508475-2643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2051be4e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1973673132_17 at /127.0.0.1:58646 [Receiving block BP-521510616-172.31.14.131-1689801340234:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 45117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2125943965-2275 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1d537e4b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1744508475-2647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46655-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x49e53547-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2033033786-2366 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2051be4e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33189 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 45117 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x49e53547-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2092803584-2334-acceptor-0@23329262-ServerConnector@2e0aae4b{HTTP/1.1, (http/1.1)}{0.0.0.0:42951} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2092803584-2333 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x50f79f06-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: 
BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801341815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RS:3;jenkins-hbase4:33189-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client 
(710212339) connection to localhost/127.0.0.1:45035 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5-prefix:jenkins-hbase4.apache.org,46655,1689801341348 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x2d2c10c4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (710212339) connection to localhost/127.0.0.1:45117 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1744508475-2642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-521510616-172.31.14.131-1689801340234:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49e53547-metaLookup-shared--pool-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 43091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1987527070@qtp-86979788-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44723 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 1971758697@qtp-1256845940-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at 
localhost/43351-SendThread(127.0.0.1:57109) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@193338dd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57109@0x5c368893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1805701911.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46655 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1e6fd7ab[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2054921609-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=836 (was 827) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 332), ProcessCount=174 (was 174), AvailableMemoryMB=4582 (was 4703) 2023-07-19 21:15:43,246 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-19 21:15:43,265 INFO [Listener at localhost/43351] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=560, OpenFileDescriptor=836, MaxFileDescriptor=60000, SystemLoadAverage=319, ProcessCount=174, AvailableMemoryMB=4581 2023-07-19 21:15:43,265 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-19 21:15:43,265 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-19 21:15:43,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:43,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:43,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:43,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
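The rsgroup RPCs logged around this point (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup) are the per-test cleanup that TestRSGroupsBase drives through RSGroupAdminClient, as the stack trace further down confirms. The following is only a hedged sketch of that call sequence against an already-open Connection, using client methods named in this log; it is illustrative, not a copy of the test code, and the host:port literal is taken from the log for the example only.

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  /** Re-creates the cleanup sequence seen in the log against an open cluster Connection. */
  public static void resetRSGroups(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // "move tables [] to rsgroup default" / "move servers [] to rsgroup default"
    rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
    rsGroupAdmin.moveServers(Collections.<Address>emptySet(), "default");

    // "remove rsgroup master" followed by "add rsgroup master"
    rsGroupAdmin.removeRSGroup("master");
    rsGroupAdmin.addRSGroup("master");

    try {
      // Moving the active master's address into the group fails because the master is
      // not a region server; the test logs this as "Got this on setup, FYI".
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:41365")),
          "master");
    } catch (ConstraintException expected) {
      // "Server ... is either offline or it does not exist."
    }
  }
}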
2023-07-19 21:15:43,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:43,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:43,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:43,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:43,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:43,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:43,274 INFO [RS:3;jenkins-hbase4:33189] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33189%2C1689801342851, suffix=, logDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,33189,1689801342851, archiveDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs, maxLogs=32 2023-07-19 21:15:43,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:43,278 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:43,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:43,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:43,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:43,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:43,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:43,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:43,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:43,292 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK] 2023-07-19 21:15:43,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:43,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:43,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38172 deadline: 1689802543293, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:43,296 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK] 2023-07-19 21:15:43,296 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK] 2023-07-19 21:15:43,296 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:43,297 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:43,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:43,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:43,298 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:43,298 INFO [RS:3;jenkins-hbase4:33189] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/WALs/jenkins-hbase4.apache.org,33189,1689801342851/jenkins-hbase4.apache.org%2C33189%2C1689801342851.1689801343274 2023-07-19 21:15:43,299 DEBUG [RS:3;jenkins-hbase4:33189] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41939,DS-221a5871-0e09-4857-91ce-7ab34a2e1727,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-b773874d-ea37-4985-bc27-cae2c14534b2,DISK], DatanodeInfoWithStorage[127.0.0.1:33067,DS-279a5cc0-7977-4429-807b-f81e8d662a5f,DISK]] 2023-07-19 21:15:43,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:43,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:43,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:43,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 21:15:43,303 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:43,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-19 21:15:43,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 21:15:43,304 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:43,305 DEBUG 
[PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:43,305 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:43,307 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-19 21:15:43,308 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,309 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea empty. 2023-07-19 21:15:43,309 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,309 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 21:15:43,322 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-19 21:15:43,323 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => a7d80be8de6cdb32c3a01bb9ad362fea, NAME => 't1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp 2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing a7d80be8de6cdb32c3a01bb9ad362fea, disabling compactions & flushes 2023-07-19 21:15:43,330 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. after waiting 0 ms 2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:43,330 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 
2023-07-19 21:15:43,330 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for a7d80be8de6cdb32c3a01bb9ad362fea: 2023-07-19 21:15:43,332 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-19 21:15:43,333 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801343333"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801343333"}]},"ts":"1689801343333"} 2023-07-19 21:15:43,334 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-19 21:15:43,335 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-19 21:15:43,335 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801343335"}]},"ts":"1689801343335"} 2023-07-19 21:15:43,336 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-19 21:15:43,342 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-19 21:15:43,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, ASSIGN}] 2023-07-19 21:15:43,343 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, ASSIGN 2023-07-19 21:15:43,344 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44851,1689801341197; forceNewPlan=false, retain=false 2023-07-19 21:15:43,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 21:15:43,494 INFO [jenkins-hbase4:41365] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-19 21:15:43,496 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a7d80be8de6cdb32c3a01bb9ad362fea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,496 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801343496"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801343496"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801343496"}]},"ts":"1689801343496"} 2023-07-19 21:15:43,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure a7d80be8de6cdb32c3a01bb9ad362fea, server=jenkins-hbase4.apache.org,44851,1689801341197}] 2023-07-19 21:15:43,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 21:15:43,650 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,651 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-19 21:15:43,659 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-19 21:15:43,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:43,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a7d80be8de6cdb32c3a01bb9ad362fea, NAME => 't1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.', STARTKEY => '', ENDKEY => ''} 2023-07-19 21:15:43,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-19 21:15:43,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,666 INFO [StoreOpener-a7d80be8de6cdb32c3a01bb9ad362fea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,668 DEBUG [StoreOpener-a7d80be8de6cdb32c3a01bb9ad362fea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/cf1 2023-07-19 21:15:43,668 DEBUG [StoreOpener-a7d80be8de6cdb32c3a01bb9ad362fea-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/cf1 2023-07-19 21:15:43,668 INFO [StoreOpener-a7d80be8de6cdb32c3a01bb9ad362fea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a7d80be8de6cdb32c3a01bb9ad362fea columnFamilyName cf1 2023-07-19 21:15:43,669 INFO [StoreOpener-a7d80be8de6cdb32c3a01bb9ad362fea-1] regionserver.HStore(310): Store=a7d80be8de6cdb32c3a01bb9ad362fea/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-19 21:15:43,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:43,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-19 21:15:43,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a7d80be8de6cdb32c3a01bb9ad362fea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12035242720, jitterRate=0.12086932361125946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-19 21:15:43,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a7d80be8de6cdb32c3a01bb9ad362fea: 2023-07-19 21:15:43,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea., pid=14, masterSystemTime=1689801343650 2023-07-19 21:15:43,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 
2023-07-19 21:15:43,683 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a7d80be8de6cdb32c3a01bb9ad362fea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:43,683 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801343682"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689801343682"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689801343682"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689801343682"}]},"ts":"1689801343682"} 2023-07-19 21:15:43,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:43,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-19 21:15:43,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure a7d80be8de6cdb32c3a01bb9ad362fea, server=jenkins-hbase4.apache.org,44851,1689801341197 in 188 msec 2023-07-19 21:15:43,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-19 21:15:43,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, ASSIGN in 345 msec 2023-07-19 21:15:43,689 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-19 21:15:43,689 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801343689"}]},"ts":"1689801343689"} 2023-07-19 21:15:43,690 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-19 21:15:43,692 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-19 21:15:43,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 392 msec 2023-07-19 21:15:43,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-19 21:15:43,906 INFO [Listener at localhost/43351] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-19 21:15:43,906 DEBUG [Listener at localhost/43351] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-19 21:15:43,907 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:43,909 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 
2023-07-19 21:15:43,909 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:43,909 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-19 21:15:43,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-19 21:15:43,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-19 21:15:43,914 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-19 21:15:43,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-19 21:15:43,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:38172 deadline: 1689801403911, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-19 21:15:43,917 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:43,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-19 21:15:44,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,018 INFO [Listener at localhost/43351] client.HBaseAdmin$15(890): Started disable of t1 
2023-07-19 21:15:44,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-19 21:15:44,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-19 21:15:44,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 21:15:44,026 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801344025"}]},"ts":"1689801344025"} 2023-07-19 21:15:44,027 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-19 21:15:44,029 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-19 21:15:44,029 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, UNASSIGN}] 2023-07-19 21:15:44,030 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, UNASSIGN 2023-07-19 21:15:44,031 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a7d80be8de6cdb32c3a01bb9ad362fea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:44,031 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801344031"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689801344031"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689801344031"}]},"ts":"1689801344031"} 2023-07-19 21:15:44,033 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure a7d80be8de6cdb32c3a01bb9ad362fea, server=jenkins-hbase4.apache.org,44851,1689801341197}] 2023-07-19 21:15:44,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 21:15:44,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:44,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a7d80be8de6cdb32c3a01bb9ad362fea, disabling compactions & flushes 2023-07-19 21:15:44,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:44,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:44,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 
after waiting 0 ms 2023-07-19 21:15:44,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:44,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-19 21:15:44,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea. 2023-07-19 21:15:44,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a7d80be8de6cdb32c3a01bb9ad362fea: 2023-07-19 21:15:44,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:44,191 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a7d80be8de6cdb32c3a01bb9ad362fea, regionState=CLOSED 2023-07-19 21:15:44,192 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689801344191"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689801344191"}]},"ts":"1689801344191"} 2023-07-19 21:15:44,194 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-19 21:15:44,194 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure a7d80be8de6cdb32c3a01bb9ad362fea, server=jenkins-hbase4.apache.org,44851,1689801341197 in 161 msec 2023-07-19 21:15:44,196 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-19 21:15:44,196 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=a7d80be8de6cdb32c3a01bb9ad362fea, UNASSIGN in 165 msec 2023-07-19 21:15:44,196 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689801344196"}]},"ts":"1689801344196"} 2023-07-19 21:15:44,197 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-19 21:15:44,199 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-19 21:15:44,200 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 180 msec 2023-07-19 21:15:44,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-19 21:15:44,328 INFO [Listener at localhost/43351] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-19 21:15:44,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-19 21:15:44,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-19 21:15:44,331 DEBUG 
[PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 21:15:44,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-19 21:15:44,332 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-19 21:15:44,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,335 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:44,337 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/cf1, FileablePath, hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/recovered.edits] 2023-07-19 21:15:44,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 21:15:44,342 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/recovered.edits/4.seqid to hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/archive/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea/recovered.edits/4.seqid 2023-07-19 21:15:44,343 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/.tmp/data/default/t1/a7d80be8de6cdb32c3a01bb9ad362fea 2023-07-19 21:15:44,343 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-19 21:15:44,345 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-19 21:15:44,347 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-19 21:15:44,348 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-19 21:15:44,349 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-19 21:15:44,349 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-19 21:15:44,349 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689801344349"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:44,351 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-19 21:15:44,351 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a7d80be8de6cdb32c3a01bb9ad362fea, NAME => 't1,,1689801343300.a7d80be8de6cdb32c3a01bb9ad362fea.', STARTKEY => '', ENDKEY => ''}] 2023-07-19 21:15:44,351 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-19 21:15:44,351 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689801344351"}]},"ts":"9223372036854775807"} 2023-07-19 21:15:44,352 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-19 21:15:44,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-19 21:15:44,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-19 21:15:44,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-19 21:15:44,439 INFO [Listener at localhost/43351] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-19 21:15:44,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:44,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,455 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:38172 deadline: 1689802544465, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,466 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:44,469 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,470 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,489 INFO [Listener at localhost/43351] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 560) - Thread LEAK? -, OpenFileDescriptor=849 (was 836) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 319), ProcessCount=174 (was 174), AvailableMemoryMB=4573 (was 4581) 2023-07-19 21:15:44,489 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-19 21:15:44,508 INFO [Listener at localhost/43351] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=319, ProcessCount=174, AvailableMemoryMB=4573 2023-07-19 21:15:44,508 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-19 21:15:44,508 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-19 21:15:44,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 21:15:44,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,520 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,522 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802544530, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,530 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:44,532 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,549 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 21:15:44,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:44,551 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-19 21:15:44,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-19 21:15:44,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-19 21:15:44,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:44,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,569 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802544578, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,579 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:44,581 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,581 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,603 INFO [Listener at localhost/43351] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 319), ProcessCount=174 (was 174), AvailableMemoryMB=4572 (was 4573) 2023-07-19 21:15:44,603 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-19 21:15:44,625 INFO [Listener at localhost/43351] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=319, ProcessCount=174, AvailableMemoryMB=4572 2023-07-19 21:15:44,625 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-19 21:15:44,625 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-19 21:15:44,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 21:15:44,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,639 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,642 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802544648, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,648 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:44,650 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,652 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-19 21:15:44,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,668 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802544676, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,676 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:44,678 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,679 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,699 INFO [Listener at localhost/43351] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 319), ProcessCount=174 (was 174), AvailableMemoryMB=4572 (was 4572) 2023-07-19 21:15:44,699 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-19 21:15:44,716 INFO [Listener at localhost/43351] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=319, ProcessCount=174, AvailableMemoryMB=4571 2023-07-19 21:15:44,716 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-19 21:15:44,716 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-19 21:15:44,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:44,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-19 21:15:44,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:44,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:44,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:44,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:44,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:44,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:44,728 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:44,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:44,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,731 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:44,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:44,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802544738, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:44,739 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-19 21:15:44,740 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,741 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:44,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:44,742 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-19 21:15:44,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-19 21:15:44,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 21:15:44,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-19 21:15:44,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 21:15:44,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,755 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 21:15:44,760 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:44,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-19 21:15:44,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-19 21:15:44,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 21:15:44,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:44,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:38172 deadline: 1689802544857, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-19 21:15:44,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-19 21:15:44,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 21:15:44,879 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 21:15:44,880 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-19 21:15:44,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-19 21:15:44,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-19 21:15:44,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 21:15:44,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:44,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-19 21:15:44,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:44,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-19 21:15:44,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:44,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:44,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:44,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-19 21:15:44,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,994 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,996 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 21:15:44,998 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:44,999 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-19 21:15:44,999 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-19 21:15:44,999 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:45,001 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-19 21:15:45,002 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-19 21:15:45,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-19 21:15:45,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-19 21:15:45,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-19 21:15:45,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:45,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:45,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-19 21:15:45,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:45,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:45,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:45,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:45,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:38172 deadline: 1689801405109, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-19 21:15:45,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:45,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:45,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-19 21:15:45,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
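For context, the ConstraintException above comes from the rsgroup coprocessor vetoing a CreateNamespace request whose hbase.rsgroup.name property names a group that does not exist. The following is a minimal client-side sketch of the two cases this test exercises, assuming an HBase 2.4 Admin client, that the rsgroup Group_foo has already been added (as logged earlier), and that the class name and the ns_bad namespace are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceRSGroupConstraintSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Succeeds when the group named by hbase.rsgroup.name exists; this mirrors the
      // CreateNamespaceProcedure for namespace Group_foo that finished with SUCCESS above.
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo")
          .build());
      // Rejected in RSGroupAdminEndpoint.preCreateNamespace with a ConstraintException,
      // because no rsgroup named "foo" exists: the failure logged just above.
      admin.createNamespace(NamespaceDescriptor.create("ns_bad")
          .addConfiguration("hbase.rsgroup.name", "foo")
          .build());
    }
  }
}

The reverse constraint also appears earlier in this log: while a namespace still references Group_foo, RemoveRSGroup is rejected with "RSGroup Group_foo is referenced by namespace: Group_foo", and only after the DeleteNamespaceProcedure completes does the removal go through.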
2023-07-19 21:15:45,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-19 21:15:45,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-19 21:15:45,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-19 21:15:45,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup
2023-07-19 21:15:45,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-19 21:15:45,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-19 21:15:45,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5
2023-07-19 21:15:45,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-19 21:15:45,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-19 21:15:45,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
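The MoveTables, MoveServers, AddRSGroup and RemoveRSGroup requests in this teardown sequence are issued through the rsgroup admin client that appears in the stack traces above (RSGroupAdminClient). A rough sketch of those calls, assuming the branch-2.4 RSGroupAdminClient whose constructor takes a Connection; the class name is illustrative and the host:port is copied from this log:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      // An empty table set is accepted and ignored, matching
      // "moveTables() passed an empty set. Ignoring." in the log.
      rsGroupAdmin.moveTables(Collections.emptySet(), "default");
      rsGroupAdmin.moveServers(Collections.emptySet(), "default");
      rsGroupAdmin.addRSGroup("master");
      try {
        // The master's own host:port is not a live region server, so this is rejected with
        // the "is either offline or it does not exist" ConstraintException seen repeatedly above.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41365)),
            "master");
      } catch (IOException expected) {
        System.out.println("moveServers rejected: " + expected.getMessage());
      }
      rsGroupAdmin.removeRSGroup("master");
    }
  }
}

The test harness treats that rejection as benign, which is why it surfaces only as the WARN "Got this on setup, FYI" entries rather than as a test failure.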
2023-07-19 21:15:45,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-19 21:15:45,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-19 21:15:45,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-19 21:15:45,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-19 21:15:45,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:45,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-19 21:15:45,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-19 21:15:45,126 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-19 21:15:45,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-19 21:15:45,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-19 21:15:45,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-19 21:15:45,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-19 21:15:45,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-19 21:15:45,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:45,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:45,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41365] to rsgroup master 2023-07-19 21:15:45,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-19 21:15:45,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:38172 deadline: 1689802545135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 2023-07-19 21:15:45,136 WARN [Listener at localhost/43351] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-19 21:15:45,137 INFO [Listener at localhost/43351] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-19 21:15:45,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-19 21:15:45,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-19 21:15:45,138 INFO [Listener at localhost/43351] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33189, jenkins-hbase4.apache.org:44179, jenkins-hbase4.apache.org:44851, jenkins-hbase4.apache.org:46655], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-19 21:15:45,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-19 21:15:45,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-19 21:15:45,159 INFO [Listener at localhost/43351] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 319), ProcessCount=174 (was 174), AvailableMemoryMB=4570 (was 4571) 2023-07-19 21:15:45,159 WARN [Listener at localhost/43351] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-19 21:15:45,159 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-19 21:15:45,159 INFO [Listener at localhost/43351] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-19 21:15:45,159 DEBUG [Listener at localhost/43351] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2051be4e to 127.0.0.1:57109 2023-07-19 21:15:45,159 DEBUG [Listener at localhost/43351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,159 DEBUG [Listener at localhost/43351] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-19 
21:15:45,159 DEBUG [Listener at localhost/43351] util.JVMClusterUtil(257): Found active master hash=546235364, stopped=false 2023-07-19 21:15:45,159 DEBUG [Listener at localhost/43351] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-19 21:15:45,159 DEBUG [Listener at localhost/43351] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-19 21:15:45,159 INFO [Listener at localhost/43351] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:45,161 INFO [Listener at localhost/43351] procedure2.ProcedureExecutor(629): Stopping 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:45,161 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-19 21:15:45,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:45,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:45,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:45,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:45,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-19 21:15:45,162 DEBUG [Listener at localhost/43351] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2d2c10c4 to 127.0.0.1:57109 
2023-07-19 21:15:45,163 DEBUG [Listener at localhost/43351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44851,1689801341197' ***** 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46655,1689801341348' ***** 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44179,1689801341500' ***** 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:45,163 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:45,163 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33189,1689801342851' ***** 2023-07-19 21:15:45,163 INFO [Listener at localhost/43351] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-19 21:15:45,163 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:45,163 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:45,169 INFO [RS:0;jenkins-hbase4:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@52e4a549{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:45,169 INFO [RS:3;jenkins-hbase4:33189] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@cbcd600{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:45,170 INFO [RS:2;jenkins-hbase4:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f5b1399{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:45,170 INFO [RS:1;jenkins-hbase4:46655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@744bd545{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-19 21:15:45,170 INFO [RS:3;jenkins-hbase4:33189] server.AbstractConnector(383): Stopped ServerConnector@4babb12e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,170 INFO [RS:0;jenkins-hbase4:44851] server.AbstractConnector(383): Stopped ServerConnector@69e30e67{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,171 INFO [RS:3;jenkins-hbase4:33189] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:45,171 INFO [RS:1;jenkins-hbase4:46655] server.AbstractConnector(383): Stopped 
ServerConnector@2e0aae4b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,171 INFO [RS:2;jenkins-hbase4:44179] server.AbstractConnector(383): Stopped ServerConnector@60c861b8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,171 INFO [RS:1;jenkins-hbase4:46655] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:45,171 INFO [RS:3;jenkins-hbase4:33189] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7158aaaa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:45,171 INFO [RS:0;jenkins-hbase4:44851] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:45,173 INFO [RS:3;jenkins-hbase4:33189] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d31db36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:45,171 INFO [RS:2;jenkins-hbase4:44179] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:45,173 INFO [RS:0;jenkins-hbase4:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41c0687c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:45,173 INFO [RS:1;jenkins-hbase4:46655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2df383ac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:45,175 INFO [RS:0;jenkins-hbase4:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@491b58c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:45,175 INFO [RS:1;jenkins-hbase4:46655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@553c89d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:45,174 INFO [RS:2;jenkins-hbase4:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2778ad29{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:45,176 INFO [RS:3;jenkins-hbase4:33189] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:45,176 INFO [RS:2;jenkins-hbase4:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@706df11{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:45,177 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:45,177 INFO [RS:3;jenkins-hbase4:33189] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-19 21:15:45,177 INFO [RS:3;jenkins-hbase4:33189] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:45,177 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:45,177 DEBUG [RS:3;jenkins-hbase4:33189] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50f79f06 to 127.0.0.1:57109 2023-07-19 21:15:45,177 DEBUG [RS:3;jenkins-hbase4:33189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,177 INFO [RS:2;jenkins-hbase4:44179] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:45,177 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33189,1689801342851; all regions closed. 2023-07-19 21:15:45,177 INFO [RS:2;jenkins-hbase4:44179] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:45,177 INFO [RS:2;jenkins-hbase4:44179] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:45,177 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(3305): Received CLOSE for ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:45,177 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:45,179 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:45,179 INFO [RS:1;jenkins-hbase4:46655] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:45,179 DEBUG [RS:2;jenkins-hbase4:44179] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24fe32ce to 127.0.0.1:57109 2023-07-19 21:15:45,179 DEBUG [RS:2;jenkins-hbase4:44179] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ac4679162104aa91ac4bdddf31746f5a, disabling compactions & flushes 2023-07-19 21:15:45,179 INFO [RS:0;jenkins-hbase4:44851] regionserver.HeapMemoryManager(220): Stopping 2023-07-19 21:15:45,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:45,180 INFO [RS:0;jenkins-hbase4:44851] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:45,180 INFO [RS:0;jenkins-hbase4:44851] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:45,180 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:45,180 DEBUG [RS:0;jenkins-hbase4:44851] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b28b37b to 127.0.0.1:57109 2023-07-19 21:15:45,180 DEBUG [RS:0;jenkins-hbase4:44851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,180 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44851,1689801341197; all regions closed. 
2023-07-19 21:15:45,179 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:45,179 INFO [RS:1;jenkins-hbase4:46655] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-19 21:15:45,179 INFO [RS:2;jenkins-hbase4:44179] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:45,180 INFO [RS:2;jenkins-hbase4:44179] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:45,180 INFO [RS:2;jenkins-hbase4:44179] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:45,180 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-19 21:15:45,180 INFO [RS:1;jenkins-hbase4:46655] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-19 21:15:45,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:45,180 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-19 21:15:45,181 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-19 21:15:45,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. after waiting 0 ms 2023-07-19 21:15:45,181 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(3305): Received CLOSE for 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:45,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 
2023-07-19 21:15:45,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-19 21:15:45,181 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-19 21:15:45,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-19 21:15:45,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-19 21:15:45,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-19 21:15:45,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-19 21:15:45,181 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1478): Online Regions={ac4679162104aa91ac4bdddf31746f5a=hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a., 1588230740=hbase:meta,,1.1588230740} 2023-07-19 21:15:45,182 DEBUG [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1504): Waiting on 1588230740, ac4679162104aa91ac4bdddf31746f5a 2023-07-19 21:15:45,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ac4679162104aa91ac4bdddf31746f5a 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-19 21:15:45,182 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:45,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 79ffd2ec6e374dad66dbd8a0c62361e3, disabling compactions & flushes 2023-07-19 21:15:45,183 DEBUG [RS:1;jenkins-hbase4:46655] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c368893 to 127.0.0.1:57109 2023-07-19 21:15:45,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:45,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:45,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. after waiting 0 ms 2023-07-19 21:15:45,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 
2023-07-19 21:15:45,183 DEBUG [RS:1;jenkins-hbase4:46655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 79ffd2ec6e374dad66dbd8a0c62361e3 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-19 21:15:45,183 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-19 21:15:45,183 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1478): Online Regions={79ffd2ec6e374dad66dbd8a0c62361e3=hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3.} 2023-07-19 21:15:45,183 DEBUG [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1504): Waiting on 79ffd2ec6e374dad66dbd8a0c62361e3 2023-07-19 21:15:45,187 DEBUG [RS:3;jenkins-hbase4:33189] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs 2023-07-19 21:15:45,187 INFO [RS:3;jenkins-hbase4:33189] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33189%2C1689801342851:(num 1689801343274) 2023-07-19 21:15:45,187 DEBUG [RS:3;jenkins-hbase4:33189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,187 INFO [RS:3;jenkins-hbase4:33189] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,187 INFO [RS:3;jenkins-hbase4:33189] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:45,188 INFO [RS:3;jenkins-hbase4:33189] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:45,188 INFO [RS:3;jenkins-hbase4:33189] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:45,188 INFO [RS:3;jenkins-hbase4:33189] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:45,188 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:45,189 INFO [RS:3;jenkins-hbase4:33189] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33189 2023-07-19 21:15:45,189 DEBUG [RS:0;jenkins-hbase4:44851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs 2023-07-19 21:15:45,190 INFO [RS:0;jenkins-hbase4:44851] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44851%2C1689801341197:(num 1689801342058) 2023-07-19 21:15:45,190 DEBUG [RS:0;jenkins-hbase4:44851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,190 INFO [RS:0;jenkins-hbase4:44851] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,191 INFO [RS:0;jenkins-hbase4:44851] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:45,191 INFO [RS:0;jenkins-hbase4:44851] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:45,191 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:45,191 INFO [RS:0;jenkins-hbase4:44851] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-19 21:15:45,191 INFO [RS:0;jenkins-hbase4:44851] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-19 21:15:45,193 INFO [RS:0;jenkins-hbase4:44851] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44851 2023-07-19 21:15:45,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/.tmp/m/99409bafb3b84253ab95eb912f8c2460 2023-07-19 21:15:45,214 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/info/852f3aa48f9648799e1490d640673bf8 2023-07-19 21:15:45,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/.tmp/info/7131bff4928e4059839716e2d896c8a7 2023-07-19 21:15:45,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 99409bafb3b84253ab95eb912f8c2460 2023-07-19 21:15:45,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/.tmp/m/99409bafb3b84253ab95eb912f8c2460 as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/m/99409bafb3b84253ab95eb912f8c2460 2023-07-19 21:15:45,222 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 852f3aa48f9648799e1490d640673bf8 2023-07-19 21:15:45,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7131bff4928e4059839716e2d896c8a7 2023-07-19 21:15:45,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/.tmp/info/7131bff4928e4059839716e2d896c8a7 as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/info/7131bff4928e4059839716e2d896c8a7 2023-07-19 21:15:45,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 99409bafb3b84253ab95eb912f8c2460 2023-07-19 21:15:45,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/m/99409bafb3b84253ab95eb912f8c2460, entries=12, sequenceid=29, filesize=5.4 K 2023-07-19 21:15:45,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 79ffd2ec6e374dad66dbd8a0c62361e3 in 44ms, sequenceid=29, compaction requested=false 2023-07-19 21:15:45,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7131bff4928e4059839716e2d896c8a7 2023-07-19 21:15:45,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/info/7131bff4928e4059839716e2d896c8a7, entries=3, sequenceid=9, filesize=5.0 K 2023-07-19 21:15:45,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for ac4679162104aa91ac4bdddf31746f5a in 50ms, sequenceid=9, compaction requested=false 2023-07-19 21:15:45,233 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/rsgroup/79ffd2ec6e374dad66dbd8a0c62361e3/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-19 21:15:45,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/rep_barrier/92f1f106c3f14f2a9b0a0bb1ae1f15cf 2023-07-19 21:15:45,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:45,238 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:45,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 79ffd2ec6e374dad66dbd8a0c62361e3: 2023-07-19 21:15:45,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689801342411.79ffd2ec6e374dad66dbd8a0c62361e3. 2023-07-19 21:15:45,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/namespace/ac4679162104aa91ac4bdddf31746f5a/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-19 21:15:45,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 2023-07-19 21:15:45,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ac4679162104aa91ac4bdddf31746f5a: 2023-07-19 21:15:45,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689801342259.ac4679162104aa91ac4bdddf31746f5a. 
2023-07-19 21:15:45,242 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92f1f106c3f14f2a9b0a0bb1ae1f15cf 2023-07-19 21:15:45,244 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,253 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,254 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,257 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/table/1fc47e914da347269bd4d442740c502e 2023-07-19 21:15:45,262 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1fc47e914da347269bd4d442740c502e 2023-07-19 21:15:45,263 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/info/852f3aa48f9648799e1490d640673bf8 as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/info/852f3aa48f9648799e1490d640673bf8 2023-07-19 21:15:45,268 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 852f3aa48f9648799e1490d640673bf8 2023-07-19 21:15:45,268 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/info/852f3aa48f9648799e1490d640673bf8, entries=22, sequenceid=26, filesize=7.3 K 2023-07-19 21:15:45,269 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/rep_barrier/92f1f106c3f14f2a9b0a0bb1ae1f15cf as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/rep_barrier/92f1f106c3f14f2a9b0a0bb1ae1f15cf 2023-07-19 21:15:45,274 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92f1f106c3f14f2a9b0a0bb1ae1f15cf 2023-07-19 21:15:45,274 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/rep_barrier/92f1f106c3f14f2a9b0a0bb1ae1f15cf, entries=1, sequenceid=26, filesize=4.9 K 2023-07-19 21:15:45,275 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/.tmp/table/1fc47e914da347269bd4d442740c502e as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/table/1fc47e914da347269bd4d442740c502e 2023-07-19 21:15:45,280 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1fc47e914da347269bd4d442740c502e 2023-07-19 21:15:45,280 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/table/1fc47e914da347269bd4d442740c502e, entries=6, sequenceid=26, filesize=5.1 K 2023-07-19 21:15:45,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 99ms, sequenceid=26, compaction requested=false 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:45,285 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,286 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44851,1689801341197 2023-07-19 21:15:45,286 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:45,286 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:45,286 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,287 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:45,287 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33189,1689801342851 2023-07-19 21:15:45,297 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-19 21:15:45,298 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-19 21:15:45,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:45,300 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-19 21:15:45,300 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-19 21:15:45,382 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44179,1689801341500; all regions closed. 2023-07-19 21:15:45,383 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33189,1689801342851] 2023-07-19 21:15:45,383 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33189,1689801342851; numProcessing=1 2023-07-19 21:15:45,383 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46655,1689801341348; all regions closed. 
2023-07-19 21:15:45,386 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33189,1689801342851 already deleted, retry=false 2023-07-19 21:15:45,386 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33189,1689801342851 expired; onlineServers=3 2023-07-19 21:15:45,386 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44851,1689801341197] 2023-07-19 21:15:45,386 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44851,1689801341197; numProcessing=2 2023-07-19 21:15:45,388 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44851,1689801341197 already deleted, retry=false 2023-07-19 21:15:45,388 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44851,1689801341197 expired; onlineServers=2 2023-07-19 21:15:45,391 DEBUG [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs 2023-07-19 21:15:45,391 INFO [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44179%2C1689801341500.meta:.meta(num 1689801342193) 2023-07-19 21:15:45,391 DEBUG [RS:1;jenkins-hbase4:46655] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs 2023-07-19 21:15:45,391 INFO [RS:1;jenkins-hbase4:46655] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46655%2C1689801341348:(num 1689801342079) 2023-07-19 21:15:45,391 DEBUG [RS:1;jenkins-hbase4:46655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,391 INFO [RS:1;jenkins-hbase4:46655] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,392 INFO [RS:1;jenkins-hbase4:46655] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:45,392 INFO [RS:1;jenkins-hbase4:46655] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-19 21:15:45,392 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:45,392 INFO [RS:1;jenkins-hbase4:46655] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-19 21:15:45,392 INFO [RS:1;jenkins-hbase4:46655] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-19 21:15:45,394 INFO [RS:1;jenkins-hbase4:46655] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46655 2023-07-19 21:15:45,397 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,397 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:45,397 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46655,1689801341348 2023-07-19 21:15:45,399 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46655,1689801341348] 2023-07-19 21:15:45,399 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46655,1689801341348; numProcessing=3 2023-07-19 21:15:45,399 DEBUG [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/oldWALs 2023-07-19 21:15:45,399 INFO [RS:2;jenkins-hbase4:44179] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44179%2C1689801341500:(num 1689801342068) 2023-07-19 21:15:45,399 DEBUG [RS:2;jenkins-hbase4:44179] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,399 INFO [RS:2;jenkins-hbase4:44179] regionserver.LeaseManager(133): Closed leases 2023-07-19 21:15:45,400 INFO [RS:2;jenkins-hbase4:44179] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-19 21:15:45,400 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-19 21:15:45,401 INFO [RS:2;jenkins-hbase4:44179] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44179 2023-07-19 21:15:45,401 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46655,1689801341348 already deleted, retry=false 2023-07-19 21:15:45,401 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46655,1689801341348 expired; onlineServers=1 2023-07-19 21:15:45,402 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44179,1689801341500 2023-07-19 21:15:45,402 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-19 21:15:45,403 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44179,1689801341500] 2023-07-19 21:15:45,403 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44179,1689801341500; numProcessing=4 2023-07-19 21:15:45,405 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44179,1689801341500 already deleted, retry=false 2023-07-19 21:15:45,405 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44179,1689801341500 expired; onlineServers=0 2023-07-19 21:15:45,405 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41365,1689801341018' ***** 2023-07-19 21:15:45,405 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-19 21:15:45,406 DEBUG [M:0;jenkins-hbase4:41365] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24b1732d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-19 21:15:45,406 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-19 21:15:45,408 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-19 21:15:45,408 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-19 21:15:45,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-19 21:15:45,408 INFO [M:0;jenkins-hbase4:41365] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@588c102c{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-19 21:15:45,409 INFO [M:0;jenkins-hbase4:41365] server.AbstractConnector(383): Stopped ServerConnector@700ffae{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,409 INFO [M:0;jenkins-hbase4:41365] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-19 21:15:45,410 INFO [M:0;jenkins-hbase4:41365] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@126fe908{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-19 21:15:45,410 INFO [M:0;jenkins-hbase4:41365] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@534c3da6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/hadoop.log.dir/,STOPPED} 2023-07-19 21:15:45,411 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41365,1689801341018 2023-07-19 21:15:45,411 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41365,1689801341018; all regions closed. 2023-07-19 21:15:45,411 DEBUG [M:0;jenkins-hbase4:41365] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-19 21:15:45,411 INFO [M:0;jenkins-hbase4:41365] master.HMaster(1491): Stopping master jetty server 2023-07-19 21:15:45,411 INFO [M:0;jenkins-hbase4:41365] server.AbstractConnector(383): Stopped ServerConnector@410280d6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-19 21:15:45,412 DEBUG [M:0;jenkins-hbase4:41365] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-19 21:15:45,412 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-19 21:15:45,412 DEBUG [M:0;jenkins-hbase4:41365] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-19 21:15:45,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801341815] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689801341815,5,FailOnTimeoutGroup] 2023-07-19 21:15:45,412 INFO [M:0;jenkins-hbase4:41365] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-19 21:15:45,412 INFO [M:0;jenkins-hbase4:41365] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-19 21:15:45,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801341815] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689801341815,5,FailOnTimeoutGroup] 2023-07-19 21:15:45,412 INFO [M:0;jenkins-hbase4:41365] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-19 21:15:45,412 DEBUG [M:0;jenkins-hbase4:41365] master.HMaster(1512): Stopping service threads 2023-07-19 21:15:45,412 INFO [M:0;jenkins-hbase4:41365] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-19 21:15:45,412 ERROR [M:0;jenkins-hbase4:41365] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-19 21:15:45,413 INFO [M:0;jenkins-hbase4:41365] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-19 21:15:45,413 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-19 21:15:45,413 DEBUG [M:0;jenkins-hbase4:41365] zookeeper.ZKUtil(398): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-19 21:15:45,413 WARN [M:0;jenkins-hbase4:41365] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-19 21:15:45,413 INFO [M:0;jenkins-hbase4:41365] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-19 21:15:45,413 INFO [M:0;jenkins-hbase4:41365] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-19 21:15:45,413 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-19 21:15:45,413 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:45,413 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:45,413 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-19 21:15:45,413 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-19 21:15:45,413 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB 2023-07-19 21:15:45,424 INFO [M:0;jenkins-hbase4:41365] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6c712854b52544e588c55c2beed54911 2023-07-19 21:15:45,429 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6c712854b52544e588c55c2beed54911 as hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6c712854b52544e588c55c2beed54911 2023-07-19 21:15:45,433 INFO [M:0;jenkins-hbase4:41365] regionserver.HStore(1080): Added hdfs://localhost:45117/user/jenkins/test-data/461aa3e6-7634-a5da-d22a-5f4c97bc24b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6c712854b52544e588c55c2beed54911, entries=22, sequenceid=175, filesize=11.1 K 2023-07-19 21:15:45,434 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78044, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-19 21:15:45,436 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-19 21:15:45,436 DEBUG [M:0;jenkins-hbase4:41365] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-19 21:15:45,440 INFO [M:0;jenkins-hbase4:41365] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-19 21:15:45,440 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-19 21:15:45,440 INFO [M:0;jenkins-hbase4:41365] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41365 2023-07-19 21:15:45,442 DEBUG [M:0;jenkins-hbase4:41365] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41365,1689801341018 already deleted, retry=false 2023-07-19 21:15:45,561 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,561 INFO [M:0;jenkins-hbase4:41365] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41365,1689801341018; zookeeper connection closed. 2023-07-19 21:15:45,562 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): master:41365-0x1017f70bb450000, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,662 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,662 INFO [RS:2;jenkins-hbase4:44179] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44179,1689801341500; zookeeper connection closed. 
2023-07-19 21:15:45,662 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44179-0x1017f70bb450003, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,662 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@627bb2ba] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@627bb2ba 2023-07-19 21:15:45,762 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,762 INFO [RS:1;jenkins-hbase4:46655] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46655,1689801341348; zookeeper connection closed. 2023-07-19 21:15:45,762 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:46655-0x1017f70bb450002, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,762 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4a62a26] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4a62a26 2023-07-19 21:15:45,862 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,862 INFO [RS:3;jenkins-hbase4:33189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33189,1689801342851; zookeeper connection closed. 2023-07-19 21:15:45,862 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:33189-0x1017f70bb45000b, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,863 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3dadac0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3dadac0 2023-07-19 21:15:45,963 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,963 INFO [RS:0;jenkins-hbase4:44851] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44851,1689801341197; zookeeper connection closed. 
2023-07-19 21:15:45,963 DEBUG [Listener at localhost/43351-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1017f70bb450001, quorum=127.0.0.1:57109, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-19 21:15:45,963 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@407a5c60] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@407a5c60 2023-07-19 21:15:45,963 INFO [Listener at localhost/43351] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-19 21:15:45,963 WARN [Listener at localhost/43351] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:45,967 INFO [Listener at localhost/43351] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:46,070 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:46,070 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-521510616-172.31.14.131-1689801340234 (Datanode Uuid 63677da5-6232-41cd-a867-77733efa8ee1) service to localhost/127.0.0.1:45117 2023-07-19 21:15:46,071 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data5/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,071 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data6/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,072 WARN [Listener at localhost/43351] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:46,075 INFO [Listener at localhost/43351] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:46,177 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:46,178 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-521510616-172.31.14.131-1689801340234 (Datanode Uuid 459e8105-47bb-46ac-b99e-3c1bc9d6098a) service to localhost/127.0.0.1:45117 2023-07-19 21:15:46,178 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data3/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,179 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data4/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,179 WARN [Listener at localhost/43351] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-19 21:15:46,184 INFO [Listener at localhost/43351] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:46,287 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-19 21:15:46,287 WARN [BP-521510616-172.31.14.131-1689801340234 heartbeating to localhost/127.0.0.1:45117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-521510616-172.31.14.131-1689801340234 (Datanode Uuid 8f954bae-a65a-4884-b0bb-b2cf83dcbd48) service to localhost/127.0.0.1:45117 2023-07-19 21:15:46,288 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data1/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,288 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3064665a-5c90-916a-9598-e6d697387183/cluster_3d9e4ea9-803d-f5ec-6a3b-41c9a4fa268d/dfs/data/data2/current/BP-521510616-172.31.14.131-1689801340234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-19 21:15:46,299 INFO [Listener at localhost/43351] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-19 21:15:46,421 INFO [Listener at localhost/43351] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-19 21:15:46,449 INFO [Listener at localhost/43351] hbase.HBaseTestingUtility(1293): Minicluster is down
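For context on what drives the closing lines above ("Shutdown of 1 master(s) and 4 regionserver(s) complete", "Shutdown MiniZK cluster with all ZK servers", "Minicluster is down"): this is the standard HBaseTestingUtility teardown path. A minimal sketch of that lifecycle, assuming a plain JUnit test class; the class name and the no-arg startMiniCluster() call are illustrative and are not the actual TestRSGroupsAdmin1 setup:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      // One shared utility per test class, as in the HBase test base classes.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Spins up a mini DFS, a mini ZooKeeper quorum and a mini HBase cluster
        // under the test-data directory managed by HBaseTestingUtility.
        TEST_UTIL.startMiniCluster();
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Stops the master and region servers via JVMClusterUtil, then shuts down
        // DFS and the MiniZK quorum, which is the sequence recorded in the log
        // above, ending with "Minicluster is down".
        TEST_UTIL.shutdownMiniCluster();
      }
    }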