2023-07-21 15:16:24,842 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b
2023-07-21 15:16:24,863 INFO  [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-21 15:16:24,884 INFO  [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-21 15:16:24,884 INFO  [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63, deleteOnExit=true
2023-07-21 15:16:24,885 INFO  [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-21 15:16:24,885 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/test.cache.data in system properties and HBase conf
2023-07-21 15:16:24,886 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.tmp.dir in system properties and HBase conf
2023-07-21 15:16:24,886 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir in system properties and HBase conf
2023-07-21 15:16:24,887 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-21 15:16:24,887 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-21 15:16:24,887 INFO  [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-21 15:16:24,988 WARN  [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-21 15:16:25,455 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-21 15:16:25,460 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-21 15:16:25,461 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-21 15:16:25,462 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-21 15:16:25,462 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 15:16:25,463 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-21 15:16:25,464 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-21 15:16:25,464 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 15:16:25,465 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 15:16:25,465 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-21 15:16:25,465 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/nfs.dump.dir in system properties and HBase conf
2023-07-21 15:16:25,466 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir in system properties and HBase conf
2023-07-21 15:16:25,466 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 15:16:25,468 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-21 15:16:25,468 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-21 15:16:26,009 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 15:16:26,012 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 15:16:26,340 WARN  [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-21 15:16:26,516 INFO  [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-21 15:16:26,530 WARN  [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 15:16:26,562 INFO  [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 15:16:26,595 INFO  [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/Jetty_localhost_localdomain_43919_hdfs____a18gt8/webapp
2023-07-21 15:16:26,737 INFO  [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43919
2023-07-21 15:16:26,749 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 15:16:26,749 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 15:16:27,254 WARN  [Listener at localhost.localdomain/41491] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 15:16:27,356 WARN  [Listener at localhost.localdomain/41491] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 15:16:27,381 WARN  [Listener at localhost.localdomain/41491] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 15:16:27,390 INFO  [Listener at localhost.localdomain/41491] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 15:16:27,399 INFO  [Listener at localhost.localdomain/41491] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/Jetty_localhost_43435_datanode____.wu58ma/webapp
2023-07-21 15:16:27,516 INFO  [Listener at localhost.localdomain/41491] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43435
2023-07-21 15:16:27,888 WARN  [Listener at localhost.localdomain/36131] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 15:16:27,909 WARN  [Listener at localhost.localdomain/36131] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 15:16:27,914 WARN  [Listener at localhost.localdomain/36131] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 15:16:27,916 INFO  [Listener at localhost.localdomain/36131] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 15:16:27,922 INFO  [Listener at localhost.localdomain/36131] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/Jetty_localhost_34369_datanode____8z3ycc/webapp
2023-07-21 15:16:28,025 INFO  [Listener at localhost.localdomain/36131] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34369
2023-07-21 15:16:28,056 WARN  [Listener at localhost.localdomain/36081] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 15:16:28,097 WARN  [Listener at localhost.localdomain/36081] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 15:16:28,105 WARN  [Listener at localhost.localdomain/36081] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 15:16:28,107 INFO  [Listener at localhost.localdomain/36081] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 15:16:28,116 INFO  [Listener at localhost.localdomain/36081] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/Jetty_localhost_37915_datanode____.in1en4/webapp
2023-07-21 15:16:28,229 INFO  [Listener at localhost.localdomain/36081] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37915
2023-07-21 15:16:28,254 WARN  [Listener at localhost.localdomain/34137] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 15:16:28,516 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdfeb46cff118dcfa: Processing first storage report for DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c from datanode d09cad6b-d2ee-437e-ab86-6ce6541d1774
2023-07-21 15:16:28,517 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdfeb46cff118dcfa: from storage DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c node DatanodeRegistration(127.0.0.1:35525, datanodeUuid=d09cad6b-d2ee-437e-ab86-6ce6541d1774, infoPort=35489, infoSecurePort=0, ipcPort=36081, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3e70fa93e62456c: Processing first storage report for DS-31048df3-0b7f-4703-8d65-58f89b639bd1 from datanode c42877a0-fda7-47c4-a5c8-bd72c0cc32f8
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3e70fa93e62456c: from storage DS-31048df3-0b7f-4703-8d65-58f89b639bd1 node DatanodeRegistration(127.0.0.1:45531, datanodeUuid=c42877a0-fda7-47c4-a5c8-bd72c0cc32f8, infoPort=45665, infoSecurePort=0, ipcPort=34137, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac69dfac7756b513: Processing first storage report for DS-c623773f-f75e-4388-ab85-117ca30bbc47 from datanode bd7d86aa-58e1-4e61-a33e-7d0cbfdebd94
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac69dfac7756b513: from storage DS-c623773f-f75e-4388-ab85-117ca30bbc47 node DatanodeRegistration(127.0.0.1:36795, datanodeUuid=bd7d86aa-58e1-4e61-a33e-7d0cbfdebd94, infoPort=38279, infoSecurePort=0, ipcPort=36131, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdfeb46cff118dcfa: Processing first storage report for DS-64b7daef-fea5-402a-9a9f-8b3ebc62b592 from datanode d09cad6b-d2ee-437e-ab86-6ce6541d1774
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdfeb46cff118dcfa: from storage DS-64b7daef-fea5-402a-9a9f-8b3ebc62b592 node DatanodeRegistration(127.0.0.1:35525, datanodeUuid=d09cad6b-d2ee-437e-ab86-6ce6541d1774, infoPort=35489, infoSecurePort=0, ipcPort=36081, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,518 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3e70fa93e62456c: Processing first storage report for DS-2054a6b9-1180-47b8-876a-51c40e5f16df from datanode c42877a0-fda7-47c4-a5c8-bd72c0cc32f8
2023-07-21 15:16:28,519 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3e70fa93e62456c: from storage DS-2054a6b9-1180-47b8-876a-51c40e5f16df node DatanodeRegistration(127.0.0.1:45531, datanodeUuid=c42877a0-fda7-47c4-a5c8-bd72c0cc32f8, infoPort=45665, infoSecurePort=0, ipcPort=34137, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,519 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac69dfac7756b513: Processing first storage report for DS-a5aa5813-2ca6-4657-b461-c4176b7dc88d from datanode bd7d86aa-58e1-4e61-a33e-7d0cbfdebd94
2023-07-21 15:16:28,519 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac69dfac7756b513: from storage DS-a5aa5813-2ca6-4657-b461-c4176b7dc88d node DatanodeRegistration(127.0.0.1:36795, datanodeUuid=bd7d86aa-58e1-4e61-a33e-7d0cbfdebd94, infoPort=38279, infoSecurePort=0, ipcPort=36131, storageInfo=lv=-57;cid=testClusterID;nsid=354407853;c=1689952586084), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 15:16:28,729 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b
2023-07-21 15:16:28,802 INFO  [Listener at localhost.localdomain/34137] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/zookeeper_0, clientPort=64886, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-21 15:16:28,819 INFO  [Listener at localhost.localdomain/34137] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64886
2023-07-21 15:16:28,827 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:28,829 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:29,533 INFO  [Listener at localhost.localdomain/34137] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 with version=8
2023-07-21 15:16:29,533 INFO  [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/hbase-staging
2023-07-21 15:16:29,548 DEBUG [Listener at localhost.localdomain/34137] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-21 15:16:29,549 DEBUG [Listener at localhost.localdomain/34137] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-21 15:16:29,549 DEBUG [Listener at localhost.localdomain/34137] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-21 15:16:29,549 DEBUG [Listener at localhost.localdomain/34137] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-21 15:16:30,022 INFO  [Listener at localhost.localdomain/34137] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-21 15:16:30,645 INFO  [Listener at localhost.localdomain/34137] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-07-21 15:16:30,695 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:30,696 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:30,696 INFO  [Listener at localhost.localdomain/34137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 15:16:30,696 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:30,697 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 15:16:30,869 INFO  [Listener at localhost.localdomain/34137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 15:16:30,976 DEBUG [Listener at localhost.localdomain/34137] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-21 15:16:31,114 INFO  [Listener at localhost.localdomain/34137] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33893
2023-07-21 15:16:31,132 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:31,135 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:31,169 INFO  [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33893 connecting to ZooKeeper ensemble=127.0.0.1:64886
2023-07-21 15:16:31,230 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:338930x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:31,238 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33893-0x10188738f0a0000 connected
2023-07-21 15:16:31,296 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 15:16:31,297 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 15:16:31,302 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 15:16:31,312 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33893
2023-07-21 15:16:31,312 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33893
2023-07-21 15:16:31,315 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33893
2023-07-21 15:16:31,316 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33893
2023-07-21 15:16:31,316 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33893
2023-07-21 15:16:31,354 INFO  [Listener at localhost.localdomain/34137] log.Log(170): Logging initialized @7299ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-21 15:16:31,531 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 15:16:31,532 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 15:16:31,533 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 15:16:31,535 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-21 15:16:31,536 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 15:16:31,536 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 15:16:31,541 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 15:16:31,616 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(1146): Jetty bound to port 43235
2023-07-21 15:16:31,619 INFO  [Listener at localhost.localdomain/34137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:31,660 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:31,666 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b4a998c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,AVAILABLE}
2023-07-21 15:16:31,667 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:31,667 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1772dcd7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 15:16:31,844 INFO  [Listener at localhost.localdomain/34137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 15:16:31,861 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 15:16:31,861 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 15:16:31,864 INFO  [Listener at localhost.localdomain/34137] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 15:16:31,878 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:31,912 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3f230d51{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/jetty-0_0_0_0-43235-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7846430800281809547/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-21 15:16:31,933 INFO  [Listener at localhost.localdomain/34137] server.AbstractConnector(333): Started ServerConnector@55f514a2{HTTP/1.1, (http/1.1)}{0.0.0.0:43235}
2023-07-21 15:16:31,933 INFO  [Listener at localhost.localdomain/34137] server.Server(415): Started @7879ms
2023-07-21 15:16:31,938 INFO  [Listener at localhost.localdomain/34137] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58, hbase.cluster.distributed=false
2023-07-21 15:16:32,050 INFO  [Listener at localhost.localdomain/34137] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-21 15:16:32,050 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,050 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,051 INFO  [Listener at localhost.localdomain/34137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 15:16:32,051 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,051 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 15:16:32,058 INFO  [Listener at localhost.localdomain/34137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 15:16:32,062 INFO  [Listener at localhost.localdomain/34137] ipc.NettyRpcServer(120): Bind to /136.243.18.41:37121
2023-07-21 15:16:32,066 INFO  [Listener at localhost.localdomain/34137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 15:16:32,074 DEBUG [Listener at localhost.localdomain/34137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 15:16:32,075 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,078 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,080 INFO  [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37121 connecting to ZooKeeper ensemble=127.0.0.1:64886
2023-07-21 15:16:32,084 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:371210x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:32,085 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:371210x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 15:16:32,085 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37121-0x10188738f0a0001 connected
2023-07-21 15:16:32,087 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 15:16:32,088 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 15:16:32,088 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37121
2023-07-21 15:16:32,089 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37121
2023-07-21 15:16:32,089 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37121
2023-07-21 15:16:32,090 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37121
2023-07-21 15:16:32,090 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37121
2023-07-21 15:16:32,093 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 15:16:32,093 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 15:16:32,094 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 15:16:32,095 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 15:16:32,095 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 15:16:32,095 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 15:16:32,095 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 15:16:32,097 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(1146): Jetty bound to port 44719
2023-07-21 15:16:32,097 INFO  [Listener at localhost.localdomain/34137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:32,100 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,100 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d1a76c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,AVAILABLE}
2023-07-21 15:16:32,101 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,101 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2dbe3bcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 15:16:32,220 INFO  [Listener at localhost.localdomain/34137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 15:16:32,222 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 15:16:32,222 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 15:16:32,222 INFO  [Listener at localhost.localdomain/34137] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 15:16:32,224 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,228 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@434024b7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/jetty-0_0_0_0-44719-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4442041539513547681/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 15:16:32,229 INFO  [Listener at localhost.localdomain/34137] server.AbstractConnector(333): Started ServerConnector@6140646b{HTTP/1.1, (http/1.1)}{0.0.0.0:44719}
2023-07-21 15:16:32,229 INFO  [Listener at localhost.localdomain/34137] server.Server(415): Started @8175ms
2023-07-21 15:16:32,245 INFO  [Listener at localhost.localdomain/34137] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-21 15:16:32,245 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,245 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,246 INFO  [Listener at localhost.localdomain/34137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 15:16:32,246 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,247 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 15:16:32,247 INFO  [Listener at localhost.localdomain/34137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 15:16:32,249 INFO  [Listener at localhost.localdomain/34137] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43323
2023-07-21 15:16:32,250 INFO  [Listener at localhost.localdomain/34137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 15:16:32,251 DEBUG [Listener at localhost.localdomain/34137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 15:16:32,252 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,255 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,257 INFO  [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43323 connecting to ZooKeeper ensemble=127.0.0.1:64886
2023-07-21 15:16:32,263 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:433230x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:32,264 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:433230x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 15:16:32,267 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:433230x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 15:16:32,268 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:433230x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 15:16:32,273 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43323-0x10188738f0a0002 connected
2023-07-21 15:16:32,276 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43323
2023-07-21 15:16:32,277 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43323
2023-07-21 15:16:32,277 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43323
2023-07-21 15:16:32,279 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43323
2023-07-21 15:16:32,280 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43323
2023-07-21 15:16:32,283 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 15:16:32,283 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 15:16:32,284 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 15:16:32,284 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 15:16:32,285 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 15:16:32,285 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 15:16:32,285 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 15:16:32,286 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(1146): Jetty bound to port 43393
2023-07-21 15:16:32,286 INFO  [Listener at localhost.localdomain/34137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:32,313 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,313 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50290df8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,AVAILABLE}
2023-07-21 15:16:32,314 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,314 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53519359{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 15:16:32,444 INFO  [Listener at localhost.localdomain/34137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 15:16:32,446 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 15:16:32,446 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 15:16:32,446 INFO  [Listener at localhost.localdomain/34137] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 15:16:32,448 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,449 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a9bfc3c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/jetty-0_0_0_0-43393-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6743788879267899197/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 15:16:32,450 INFO  [Listener at localhost.localdomain/34137] server.AbstractConnector(333): Started ServerConnector@3de2d87a{HTTP/1.1, (http/1.1)}{0.0.0.0:43393}
2023-07-21 15:16:32,450 INFO  [Listener at localhost.localdomain/34137] server.Server(415): Started @8396ms
2023-07-21 15:16:32,465 INFO  [Listener at localhost.localdomain/34137] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-21 15:16:32,465 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,465 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,465 INFO  [Listener at localhost.localdomain/34137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 15:16:32,465 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:32,466 INFO  [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 15:16:32,466 INFO  [Listener at localhost.localdomain/34137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 15:16:32,467 INFO  [Listener at localhost.localdomain/34137] ipc.NettyRpcServer(120): Bind to /136.243.18.41:46091
2023-07-21 15:16:32,468 INFO  [Listener at localhost.localdomain/34137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 15:16:32,469 DEBUG [Listener at localhost.localdomain/34137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 15:16:32,471 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,472 INFO  [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,474 INFO  [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46091 connecting to ZooKeeper ensemble=127.0.0.1:64886
2023-07-21 15:16:32,478 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:460910x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:32,480 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:460910x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 15:16:32,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46091-0x10188738f0a0003 connected
2023-07-21 15:16:32,481 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 15:16:32,482 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 15:16:32,484 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46091
2023-07-21 15:16:32,486 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46091
2023-07-21 15:16:32,492 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46091
2023-07-21 15:16:32,494 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46091
2023-07-21 15:16:32,495 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46091
2023-07-21 15:16:32,498 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 15:16:32,499 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 15:16:32,499 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 15:16:32,500 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 15:16:32,500 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 15:16:32,500 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 15:16:32,501 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 15:16:32,502 INFO  [Listener at localhost.localdomain/34137] http.HttpServer(1146): Jetty bound to port 34213
2023-07-21 15:16:32,507 INFO  [Listener at localhost.localdomain/34137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:32,513 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,514 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@747ccb69{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,AVAILABLE}
2023-07-21 15:16:32,514 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,515 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5160d086{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 15:16:32,638 INFO  [Listener at localhost.localdomain/34137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 15:16:32,639 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 15:16:32,639 INFO  [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 15:16:32,640 INFO  [Listener at localhost.localdomain/34137] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 15:16:32,642 INFO  [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:32,643 INFO  [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4d533b2e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/jetty-0_0_0_0-34213-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2801929951490709737/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 15:16:32,645 INFO  [Listener at localhost.localdomain/34137] server.AbstractConnector(333): Started ServerConnector@21464387{HTTP/1.1, (http/1.1)}{0.0.0.0:34213}
2023-07-21 15:16:32,645 INFO  [Listener at localhost.localdomain/34137] server.Server(415): Started @8591ms
2023-07-21 15:16:32,653 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:32,688 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@77265d22{HTTP/1.1, (http/1.1)}{0.0.0.0:42403}
2023-07-21 15:16:32,689 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @8635ms
2023-07-21 15:16:32,689 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,33893,1689952589806
2023-07-21 15:16:32,700 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 15:16:32,702 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,33893,1689952589806
2023-07-21 15:16:32,723 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 15:16:32,723 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 15:16:32,723 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 15:16:32,723 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 15:16:32,723 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 15:16:32,726 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 15:16:32,726 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 15:16:32,727 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,33893,1689952589806 from backup master directory
2023-07-21 15:16:32,730 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,33893,1689952589806
2023-07-21 15:16:32,730 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 15:16:32,731 WARN  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-21 15:16:32,731 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,33893,1689952589806
2023-07-21 15:16:32,735 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-21 15:16:32,737 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-21 15:16:32,827 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/hbase.id with ID: 948fd356-6acc-45fb-b1e3-770257589271
2023-07-21 15:16:32,887 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:32,907 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 15:16:32,960 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0cc3871e to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 15:16:32,989 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33075c87, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 15:16:33,024 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-21 15:16:33,027 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-21 15:16:33,050 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-21 15:16:33,050 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-21 15:16:33,051 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
	at java.lang.Enum.valueOf(Enum.java:238)
	at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
	at
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:33,055 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:33,056 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:33,097 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store-tmp 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:16:33,145 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:33,145 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:33,145 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:33,147 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/WALs/jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:16:33,169 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33893%2C1689952589806, suffix=, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/WALs/jenkins-hbase17.apache.org,33893,1689952589806, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/oldWALs, maxLogs=10 2023-07-21 15:16:33,278 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:33,276 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:33,278 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:33,294 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:33,408 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/WALs/jenkins-hbase17.apache.org,33893,1689952589806/jenkins-hbase17.apache.org%2C33893%2C1689952589806.1689952593182 2023-07-21 15:16:33,412 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK]] 2023-07-21 15:16:33,414 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:33,415 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:33,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,422 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,514 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,522 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:16:33,571 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:16:33,590 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-21 15:16:33,597 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,599 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,620 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:33,625 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:33,627 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10675589600, jitterRate=-0.005758240818977356}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:33,627 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:33,629 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:16:33,655 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:16:33,655 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:16:33,659 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 15:16:33,662 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 15:16:33,707 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 45 msec 2023-07-21 15:16:33,708 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:16:33,737 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 15:16:33,744 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 15:16:33,753 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 15:16:33,760 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:16:33,766 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:16:33,769 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:33,770 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:16:33,771 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:16:33,785 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:16:33,791 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:33,791 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:33,791 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:33,796 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:33,796 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:33,796 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,33893,1689952589806, sessionid=0x10188738f0a0000, setting cluster-up flag (Was=false) 2023-07-21 15:16:33,817 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:33,826 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:16:33,827 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:16:33,832 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:33,867 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 15:16:33,873 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:16:33,877 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.hbase-snapshot/.tmp 2023-07-21 15:16:33,956 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(951): ClusterId : 948fd356-6acc-45fb-b1e3-770257589271 2023-07-21 15:16:33,957 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(951): ClusterId : 948fd356-6acc-45fb-b1e3-770257589271 2023-07-21 15:16:33,970 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(951): ClusterId : 948fd356-6acc-45fb-b1e3-770257589271 2023-07-21 15:16:33,975 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:33,978 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:33,976 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:33,989 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:16:33,994 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:33,994 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:33,994 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:33,996 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:33,994 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:34,001 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:34,005 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:34,005 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:34,010 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:16:34,017 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:34,022 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:16:34,022 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 15:16:34,036 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:34,021 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ReadOnlyZKClient(139): Connect 0x70ce3fce to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:34,039 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ReadOnlyZKClient(139): Connect 0x34ea0ffe to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:34,052 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ReadOnlyZKClient(139): Connect 0x312a4151 to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:34,103 DEBUG [RS:2;jenkins-hbase17:46091] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5511f2c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:34,107 DEBUG [RS:2;jenkins-hbase17:46091] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31ff175a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:34,111 DEBUG [RS:0;jenkins-hbase17:37121] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e85dc0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:34,111 DEBUG [RS:0;jenkins-hbase17:37121] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@438f2233, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:34,112 DEBUG [RS:1;jenkins-hbase17:43323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71e88707, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:34,112 DEBUG [RS:1;jenkins-hbase17:43323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40d3716f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, 
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:34,139 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:43323 2023-07-21 15:16:34,139 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:37121 2023-07-21 15:16:34,141 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:46091 2023-07-21 15:16:34,146 INFO [RS:0;jenkins-hbase17:37121] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:34,147 INFO [RS:0;jenkins-hbase17:37121] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:34,146 INFO [RS:2;jenkins-hbase17:46091] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:34,147 INFO [RS:2;jenkins-hbase17:46091] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:34,146 INFO [RS:1;jenkins-hbase17:43323] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:34,147 INFO [RS:1;jenkins-hbase17:43323] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:34,147 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:34,147 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:34,147 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:16:34,152 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:37121, startcode=1689952592049 2023-07-21 15:16:34,152 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:46091, startcode=1689952592464 2023-07-21 15:16:34,153 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:43323, startcode=1689952592244 2023-07-21 15:16:34,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 15:16:34,175 DEBUG [RS:2;jenkins-hbase17:46091] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:34,175 DEBUG [RS:0;jenkins-hbase17:37121] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:34,175 DEBUG [RS:1;jenkins-hbase17:43323] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:34,224 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:16:34,231 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:16:34,232 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:16:34,233 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 15:16:34,235 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:34,235 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:34,235 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:34,235 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:34,235 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:16:34,236 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,236 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:34,236 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,242 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33191, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:34,242 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51895, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:34,242 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:33913, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:34,253 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952624253 2023-07-21 15:16:34,257 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:16:34,260 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:16:34,260 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 15:16:34,260 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:34,263 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:16:34,263 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:34,275 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:34,278 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:34,283 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:16:34,289 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:16:34,290 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:16:34,290 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:16:34,293 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,296 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:16:34,305 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:16:34,306 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:16:34,309 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:16:34,310 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:16:34,313 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:16:34,313 WARN [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:16:34,316 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952594312,5,FailOnTimeoutGroup] 2023-07-21 15:16:34,321 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:16:34,321 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:16:34,321 WARN [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:16:34,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952594321,5,FailOnTimeoutGroup] 2023-07-21 15:16:34,321 WARN [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:16:34,322 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,322 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-21 15:16:34,323 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,324 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,377 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:34,378 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:34,378 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 2023-07-21 15:16:34,407 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:34,409 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:34,414 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info 2023-07-21 15:16:34,414 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:43323, startcode=1689952592244 2023-07-21 15:16:34,415 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:34,416 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,417 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:34,420 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:34,421 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,422 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:34,423 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:34,423 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:46091, startcode=1689952592464 2023-07-21 15:16:34,423 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:37121, startcode=1689952592049 2023-07-21 15:16:34,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:34,425 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:16:34,428 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table 2023-07-21 15:16:34,429 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,430 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:34,430 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:34,430 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:16:34,432 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,435 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 2023-07-21 15:16:34,435 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 2023-07-21 15:16:34,436 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41491 2023-07-21 15:16:34,435 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:34,436 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43235 2023-07-21 15:16:34,436 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41491 2023-07-21 15:16:34,436 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,437 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43235 2023-07-21 15:16:34,437 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:16:34,437 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 2023-07-21 15:16:34,438 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41491 2023-07-21 15:16:34,438 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43235 2023-07-21 15:16:34,439 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:34,440 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:34,444 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:34,446 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:16:34,447 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,447 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,447 WARN [RS:1;jenkins-hbase17:43323] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:16:34,448 INFO [RS:1;jenkins-hbase17:43323] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:34,448 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,46091,1689952592464] 2023-07-21 15:16:34,448 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,448 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,37121,1689952592049] 2023-07-21 15:16:34,448 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43323,1689952592244] 2023-07-21 15:16:34,447 WARN [RS:0;jenkins-hbase17:37121] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:16:34,447 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,452 WARN [RS:2;jenkins-hbase17:46091] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:16:34,454 INFO [RS:2;jenkins-hbase17:46091] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:34,455 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,452 INFO [RS:0;jenkins-hbase17:37121] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:34,456 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,456 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:34,492 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:34,493 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10611495840, jitterRate=-0.011727437376976013}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:34,497 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:34,497 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:16:34,497 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:16:34,497 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit 
for close lock on hbase:meta,,1.1588230740 2023-07-21 15:16:34,498 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:16:34,498 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:16:34,503 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:34,504 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:16:34,510 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:16:34,511 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 15:16:34,511 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,512 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,511 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,513 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,513 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,514 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,514 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,514 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,514 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:16:34,536 DEBUG [RS:0;jenkins-hbase17:37121] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:34,537 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.Replication(139): Replication stats-in-log period=300 seconds 
2023-07-21 15:16:34,540 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:34,544 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:16:34,554 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 15:16:34,558 INFO [RS:0;jenkins-hbase17:37121] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:34,558 INFO [RS:1;jenkins-hbase17:43323] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:34,559 INFO [RS:2;jenkins-hbase17:46091] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:34,642 INFO [RS:2;jenkins-hbase17:46091] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:34,647 INFO [RS:1;jenkins-hbase17:43323] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:34,653 INFO [RS:0;jenkins-hbase17:37121] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:34,655 INFO [RS:0;jenkins-hbase17:37121] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:34,655 INFO [RS:1;jenkins-hbase17:43323] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:34,656 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,655 INFO [RS:2;jenkins-hbase17:46091] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:34,656 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,657 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:34,658 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:34,658 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:34,658 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:34,669 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,669 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,670 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,669 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,670 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,671 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,671 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,671 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,672 DEBUG [RS:0;jenkins-hbase17:37121] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,671 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,671 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, 
corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,673 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:34,674 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:2;jenkins-hbase17:46091] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 DEBUG [RS:1;jenkins-hbase17:43323] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:34,674 INFO [RS:0;jenkins-hbase17:37121] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,675 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,675 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,680 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,680 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,680 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,681 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,681 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,681 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,692 INFO [RS:0;jenkins-hbase17:37121] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:34,696 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,37121,1689952592049-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,709 DEBUG [jenkins-hbase17:33893] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:34,713 INFO [RS:1;jenkins-hbase17:43323] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:34,714 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43323,1689952592244-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:34,719 INFO [RS:2;jenkins-hbase17:46091] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:34,719 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46091,1689952592464-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:34,720 INFO [RS:0;jenkins-hbase17:37121] regionserver.Replication(203): jenkins-hbase17.apache.org,37121,1689952592049 started 2023-07-21 15:16:34,720 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,37121,1689952592049, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:37121, sessionid=0x10188738f0a0001 2023-07-21 15:16:34,724 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:34,724 DEBUG [RS:0;jenkins-hbase17:37121] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,724 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37121,1689952592049' 2023-07-21 15:16:34,724 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:34,725 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,37121,1689952592049' 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:34,726 DEBUG [RS:0;jenkins-hbase17:37121] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:34,727 DEBUG [jenkins-hbase17:33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:34,728 DEBUG [RS:0;jenkins-hbase17:37121] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:34,728 INFO [RS:0;jenkins-hbase17:37121] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:34,728 INFO [RS:0;jenkins-hbase17:37121] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 15:16:34,728 DEBUG [jenkins-hbase17:33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:34,728 DEBUG [jenkins-hbase17:33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:34,728 DEBUG [jenkins-hbase17:33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:34,728 DEBUG [jenkins-hbase17:33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:34,729 INFO [RS:1;jenkins-hbase17:43323] regionserver.Replication(203): jenkins-hbase17.apache.org,43323,1689952592244 started 2023-07-21 15:16:34,729 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43323,1689952592244, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43323, sessionid=0x10188738f0a0002 2023-07-21 15:16:34,729 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:34,730 DEBUG [RS:1;jenkins-hbase17:43323] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,730 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43323,1689952592244' 2023-07-21 15:16:34,730 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:34,732 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:34,732 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:34,732 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:34,732 DEBUG [RS:1;jenkins-hbase17:43323] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:34,733 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43323,1689952592244' 2023-07-21 15:16:34,733 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:34,733 DEBUG [RS:1;jenkins-hbase17:43323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:34,733 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37121,1689952592049, state=OPENING 2023-07-21 15:16:34,734 DEBUG [RS:1;jenkins-hbase17:43323] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:34,734 INFO [RS:1;jenkins-hbase17:43323] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:34,734 INFO [RS:1;jenkins-hbase17:43323] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 15:16:34,740 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 15:16:34,741 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:34,742 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:34,745 INFO [RS:2;jenkins-hbase17:46091] regionserver.Replication(203): jenkins-hbase17.apache.org,46091,1689952592464 started 2023-07-21 15:16:34,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:34,746 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,46091,1689952592464, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:46091, sessionid=0x10188738f0a0003 2023-07-21 15:16:34,746 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:34,746 DEBUG [RS:2;jenkins-hbase17:46091] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,746 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46091,1689952592464' 2023-07-21 15:16:34,746 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:34,747 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46091,1689952592464' 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:34,748 DEBUG [RS:2;jenkins-hbase17:46091] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:34,749 DEBUG [RS:2;jenkins-hbase17:46091] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:34,749 INFO [RS:2;jenkins-hbase17:46091] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:34,749 INFO [RS:2;jenkins-hbase17:46091] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 15:16:34,837 WARN [ReadOnlyZKClient-127.0.0.1:64886@0x0cc3871e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:16:34,843 INFO [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37121%2C1689952592049, suffix=, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,37121,1689952592049, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:34,843 INFO [RS:1;jenkins-hbase17:43323] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43323%2C1689952592244, suffix=, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,43323,1689952592244, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:34,852 INFO [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46091%2C1689952592464, suffix=, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,46091,1689952592464, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:34,865 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:34,865 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:34,867 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:34,870 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:34,876 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:34,876 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:34,876 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:34,888 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:34,888 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:34,888 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:34,888 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:34,889 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37121] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:60278 deadline: 1689952654889, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,890 INFO [RS:1;jenkins-hbase17:43323] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,43323,1689952592244/jenkins-hbase17.apache.org%2C43323%2C1689952592244.1689952594846 2023-07-21 15:16:34,891 DEBUG [RS:1;jenkins-hbase17:43323] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK], DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK]] 2023-07-21 15:16:34,899 INFO [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,37121,1689952592049/jenkins-hbase17.apache.org%2C37121%2C1689952592049.1689952594846 2023-07-21 15:16:34,899 DEBUG [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK], DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK]] 2023-07-21 15:16:34,903 INFO [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,46091,1689952592464/jenkins-hbase17.apache.org%2C46091%2C1689952592464.1689952594854 2023-07-21 15:16:34,904 DEBUG [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK], DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK]] 2023-07-21 15:16:34,924 DEBUG 
[RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:34,928 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:34,932 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60290, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:34,944 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:16:34,945 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:34,949 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C37121%2C1689952592049.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,37121,1689952592049, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:34,965 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:34,965 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:34,966 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:34,971 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,37121,1689952592049/jenkins-hbase17.apache.org%2C37121%2C1689952592049.meta.1689952594950.meta 2023-07-21 15:16:34,972 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK], DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK]] 2023-07-21 15:16:34,972 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:34,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:34,976 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 
2023-07-21 15:16:34,978 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 15:16:34,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:16:34,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:34,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:16:34,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:16:34,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:34,988 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info 2023-07-21 15:16:34,990 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info 2023-07-21 15:16:34,990 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:34,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:34,993 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:34,993 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:34,993 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:34,994 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,994 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:34,996 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table 2023-07-21 15:16:34,996 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table 2023-07-21 15:16:34,996 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:34,997 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:34,998 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:35,001 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:35,005 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:16:35,008 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:35,010 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10158773120, jitterRate=-0.05389052629470825}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:35,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:35,029 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689952594918 2023-07-21 15:16:35,052 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37121,1689952592049, state=OPEN 2023-07-21 15:16:35,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:16:35,057 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:16:35,059 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:35,059 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:35,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 15:16:35,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37121,1689952592049 in 314 msec 2023-07-21 15:16:35,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 15:16:35,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 545 msec 2023-07-21 15:16:35,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0520 sec 2023-07-21 15:16:35,090 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952595090, completionTime=-1 2023-07-21 15:16:35,090 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 15:16:35,090 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 15:16:35,159 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:16:35,159 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952655159 2023-07-21 15:16:35,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952715160 2023-07-21 15:16:35,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 69 msec 2023-07-21 15:16:35,184 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33893,1689952589806-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:35,184 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33893,1689952589806-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:35,184 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33893,1689952589806-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:35,186 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:33893, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:35,187 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:35,195 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 15:16:35,207 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 15:16:35,208 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:35,220 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 15:16:35,223 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:35,227 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:35,251 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,255 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 empty. 2023-07-21 15:16:35,256 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,256 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 15:16:35,306 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:35,309 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6b64a17fcefcc8a68fcd4f0dcc651985, NAME => 'hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:35,327 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:35,327 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6b64a17fcefcc8a68fcd4f0dcc651985, disabling compactions & flushes 2023-07-21 15:16:35,328 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
2023-07-21 15:16:35,328 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:35,328 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. after waiting 0 ms 2023-07-21 15:16:35,328 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:35,328 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:35,328 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6b64a17fcefcc8a68fcd4f0dcc651985: 2023-07-21 15:16:35,333 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:35,354 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952595336"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952595336"}]},"ts":"1689952595336"} 2023-07-21 15:16:35,395 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:16:35,400 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:35,407 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952595401"}]},"ts":"1689952595401"} 2023-07-21 15:16:35,415 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 15:16:35,422 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:35,427 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:16:35,433 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:35,435 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} 
racks are {/default-rack=0} 2023-07-21 15:16:35,435 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:35,435 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:35,435 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:35,435 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:35,437 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:35,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, ASSIGN}] 2023-07-21 15:16:35,445 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,447 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, ASSIGN 2023-07-21 15:16:35,448 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 empty. 2023-07-21 15:16:35,449 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,449 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 15:16:35,450 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:35,530 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:35,532 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a1be046ee9a2834d581cd55948dca519, NAME => 'hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:35,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:35,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a1be046ee9a2834d581cd55948dca519, disabling compactions & flushes 2023-07-21 15:16:35,596 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:35,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:35,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. after waiting 0 ms 2023-07-21 15:16:35,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:35,597 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:35,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:35,601 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:35,605 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:35,605 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952595604"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952595604"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952595604"}]},"ts":"1689952595604"} 2023-07-21 15:16:35,610 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:35,613 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE; OpenRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:35,613 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952595613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952595613"}]},"ts":"1689952595613"} 2023-07-21 15:16:35,631 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 15:16:35,633 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:35,633 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952595633"}]},"ts":"1689952595633"} 2023-07-21 15:16:35,636 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 15:16:35,640 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:35,641 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:35,641 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:35,641 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:35,641 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:35,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, ASSIGN}] 2023-07-21 15:16:35,645 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, ASSIGN 2023-07-21 15:16:35,647 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:35,784 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
2023-07-21 15:16:35,785 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6b64a17fcefcc8a68fcd4f0dcc651985, NAME => 'hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:35,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:35,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,789 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,792 DEBUG [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info 2023-07-21 15:16:35,792 DEBUG [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info 2023-07-21 15:16:35,792 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6b64a17fcefcc8a68fcd4f0dcc651985 columnFamilyName info 2023-07-21 15:16:35,794 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] regionserver.HStore(310): Store=6b64a17fcefcc8a68fcd4f0dcc651985/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:35,795 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,798 INFO 
[jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:35,799 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:35,800 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952595799"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952595799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952595799"}]},"ts":"1689952595799"} 2023-07-21 15:16:35,801 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:35,807 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:35,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:35,811 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6b64a17fcefcc8a68fcd4f0dcc651985; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10367669920, jitterRate=-0.03443549573421478}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:35,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6b64a17fcefcc8a68fcd4f0dcc651985: 2023-07-21 15:16:35,814 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985., pid=7, masterSystemTime=1689952595776 2023-07-21 15:16:35,817 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:35,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
2023-07-21 15:16:35,820 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:35,820 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952595819"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952595819"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952595819"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952595819"}]},"ts":"1689952595819"} 2023-07-21 15:16:35,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 15:16:35,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; OpenRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,37121,1689952592049 in 210 msec 2023-07-21 15:16:35,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-21 15:16:35,835 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, ASSIGN in 390 msec 2023-07-21 15:16:35,837 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:35,837 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952595837"}]},"ts":"1689952595837"} 2023-07-21 15:16:35,840 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 15:16:35,843 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:35,847 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 634 msec 2023-07-21 15:16:35,927 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 15:16:35,928 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:16:35,928 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:35,968 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 15:16:35,980 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] 
handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:35,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a1be046ee9a2834d581cd55948dca519, NAME => 'hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:35,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:35,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. service=MultiRowMutationService 2023-07-21 15:16:35,983 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 15:16:35,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:35,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,983 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,986 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,989 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:35,989 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:35,990 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a1be046ee9a2834d581cd55948dca519 columnFamilyName m 2023-07-21 15:16:35,991 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(310): Store=a1be046ee9a2834d581cd55948dca519/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:35,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:35,997 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:16:35,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:36,007 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 44 msec 2023-07-21 15:16:36,008 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:36,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:36,016 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a1be046ee9a2834d581cd55948dca519; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@36d33cb0, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:36,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:36,018 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519., pid=9, masterSystemTime=1689952595962 2023-07-21 15:16:36,021 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:16:36,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:36,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 
2023-07-21 15:16:36,024 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,024 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952596024"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952596024"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952596024"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952596024"}]},"ts":"1689952596024"} 2023-07-21 15:16:36,025 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 15:16:36,026 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:16:36,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-21 15:16:36,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,37121,1689952592049 in 221 msec 2023-07-21 15:16:36,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-21 15:16:36,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, ASSIGN in 392 msec 2023-07-21 15:16:36,050 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:16:36,058 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 36 msec 2023-07-21 15:16:36,060 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:36,060 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952596060"}]},"ts":"1689952596060"} 2023-07-21 15:16:36,064 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 15:16:36,070 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 15:16:36,072 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 15:16:36,073 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.342sec 2023-07-21 15:16:36,073 INFO 
[PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:36,075 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 15:16:36,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 652 msec 2023-07-21 15:16:36,077 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 15:16:36,077 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 15:16:36,079 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33893,1689952589806-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 15:16:36,079 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33893,1689952589806-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 15:16:36,088 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 15:16:36,143 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 15:16:36,143 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 15:16:36,164 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ReadOnlyZKClient(139): Connect 0x76791164 to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:36,174 DEBUG [Listener at localhost.localdomain/34137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bc3da4c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:36,189 DEBUG [hconnection-0x5a0ffa86-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:36,203 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:36,211 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:36,211 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:36,214 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:16:36,215 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:16:36,216 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:36,219 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 15:16:36,226 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 15:16:36,229 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53818, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 15:16:36,240 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:16:36,241 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:36,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 15:16:36,247 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ReadOnlyZKClient(139): Connect 0x7aaf6e90 to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 
15:16:36,253 DEBUG [Listener at localhost.localdomain/34137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6453576e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:36,253 INFO [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64886 2023-07-21 15:16:36,256 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:36,258 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10188738f0a000a connected 2023-07-21 15:16:36,294 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=421, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=636, ProcessCount=186, AvailableMemoryMB=1790 2023-07-21 15:16:36,297 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 15:16:36,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:36,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:36,361 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:36,372 INFO [Listener at localhost.localdomain/34137] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:36,373 INFO [Listener at localhost.localdomain/34137] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:36,376 INFO [Listener at localhost.localdomain/34137] ipc.NettyRpcServer(120): Bind to 
/136.243.18.41:41557 2023-07-21 15:16:36,377 INFO [Listener at localhost.localdomain/34137] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:16:36,378 DEBUG [Listener at localhost.localdomain/34137] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:16:36,379 INFO [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:36,384 INFO [Listener at localhost.localdomain/34137] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:36,388 INFO [Listener at localhost.localdomain/34137] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41557 connecting to ZooKeeper ensemble=127.0.0.1:64886 2023-07-21 15:16:36,392 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:415570x0, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:36,393 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(162): regionserver:415570x0, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:36,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41557-0x10188738f0a000b connected 2023-07-21 15:16:36,395 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 15:16:36,397 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ZKUtil(164): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:36,398 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41557 2023-07-21 15:16:36,398 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41557 2023-07-21 15:16:36,400 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41557 2023-07-21 15:16:36,401 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41557 2023-07-21 15:16:36,401 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41557 2023-07-21 15:16:36,403 INFO [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:36,403 INFO [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:36,403 INFO [Listener at localhost.localdomain/34137] http.HttpServer(900): Added global filter 'securityheaders' 
(class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:36,403 INFO [Listener at localhost.localdomain/34137] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:16:36,404 INFO [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:36,404 INFO [Listener at localhost.localdomain/34137] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:36,404 INFO [Listener at localhost.localdomain/34137] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 15:16:36,404 INFO [Listener at localhost.localdomain/34137] http.HttpServer(1146): Jetty bound to port 45273 2023-07-21 15:16:36,404 INFO [Listener at localhost.localdomain/34137] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:36,405 INFO [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:36,406 INFO [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@726beee5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:36,406 INFO [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:36,406 INFO [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29f60ee1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:36,515 INFO [Listener at localhost.localdomain/34137] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:36,517 INFO [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:36,517 INFO [Listener at localhost.localdomain/34137] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:36,518 INFO [Listener at localhost.localdomain/34137] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:16:36,519 INFO [Listener at localhost.localdomain/34137] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:36,520 INFO [Listener at localhost.localdomain/34137] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5bd7650f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/java.io.tmpdir/jetty-0_0_0_0-45273-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7952315075875461360/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-21 15:16:36,522 INFO [Listener at localhost.localdomain/34137] server.AbstractConnector(333): Started ServerConnector@4bb7d136{HTTP/1.1, (http/1.1)}{0.0.0.0:45273} 2023-07-21 15:16:36,523 INFO [Listener at localhost.localdomain/34137] server.Server(415): Started @12468ms 2023-07-21 15:16:36,526 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(951): ClusterId : 948fd356-6acc-45fb-b1e3-770257589271 2023-07-21 15:16:36,527 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:36,529 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:36,529 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:36,531 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:36,533 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ReadOnlyZKClient(139): Connect 0x5562d093 to 127.0.0.1:64886 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:36,549 DEBUG [RS:3;jenkins-hbase17:41557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ba5344f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:36,549 DEBUG [RS:3;jenkins-hbase17:41557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f9abbe3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:36,560 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:41557 2023-07-21 15:16:36,560 INFO [RS:3;jenkins-hbase17:41557] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:36,561 INFO [RS:3;jenkins-hbase17:41557] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:36,561 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:16:36,563 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,33893,1689952589806 with isa=jenkins-hbase17.apache.org/136.243.18.41:41557, startcode=1689952596371 2023-07-21 15:16:36,563 DEBUG [RS:3;jenkins-hbase17:41557] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:36,577 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34039, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:36,577 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,577 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:36,578 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58 2023-07-21 15:16:36,578 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41491 2023-07-21 15:16:36,578 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43235 2023-07-21 15:16:36,580 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:36,580 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:36,581 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:36,581 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,581 WARN [RS:3;jenkins-hbase17:41557] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:16:36,581 INFO [RS:3;jenkins-hbase17:41557] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:36,582 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:36,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:36,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:36,583 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,583 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,583 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:36,584 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:36,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:36,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:36,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,584 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:16:36,584 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:36,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,597 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,33893,1689952589806] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 15:16:36,597 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41557,1689952596371] 2023-07-21 15:16:36,598 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:36,598 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,599 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:36,599 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ZKUtil(162): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,602 DEBUG [RS:3;jenkins-hbase17:41557] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:36,603 INFO [RS:3;jenkins-hbase17:41557] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:36,608 INFO [RS:3;jenkins-hbase17:41557] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:36,609 INFO [RS:3;jenkins-hbase17:41557] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:36,609 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:36,609 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:36,612 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:36,612 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,613 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,613 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,613 DEBUG [RS:3;jenkins-hbase17:41557] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:36,624 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:36,624 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:36,624 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:36,636 INFO [RS:3;jenkins-hbase17:41557] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:36,636 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41557,1689952596371-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
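[editor's note] The MemStoreFlusher record above reports globalMemStoreLimit=782.4 M and globalMemStoreLimitLowMark=743.3 M, a ratio of roughly 0.95, which is consistent with HBase's default lower-limit fraction. As a hedged illustration only (not code from this test run), these thresholds are controlled by the standard configuration keys shown below; the values are example numbers.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreLimitSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap that all memstores together may use
    // before flushes are forced (782.4 M in the log for this JVM's heap).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Low-water mark as a fraction of the limit above; 0.95 matches the
    // 743.3 M / 782.4 M ratio reported by MemStoreFlusher.
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    System.out.println(conf.get("hbase.regionserver.global.memstore.size"));
  }
}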
2023-07-21 15:16:36,647 INFO [RS:3;jenkins-hbase17:41557] regionserver.Replication(203): jenkins-hbase17.apache.org,41557,1689952596371 started 2023-07-21 15:16:36,648 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41557,1689952596371, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41557, sessionid=0x10188738f0a000b 2023-07-21 15:16:36,648 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:36,648 DEBUG [RS:3;jenkins-hbase17:41557] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,648 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41557,1689952596371' 2023-07-21 15:16:36,648 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:36,649 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41557,1689952596371' 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:36,650 DEBUG [RS:3;jenkins-hbase17:41557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:36,651 DEBUG [RS:3;jenkins-hbase17:41557] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:36,651 INFO [RS:3;jenkins-hbase17:41557] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:36,651 INFO [RS:3;jenkins-hbase17:41557] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
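[editor's note] The two quota records above show both the RPC and space quota managers staying idle because quota support is off in this test configuration. Purely as a hedged aside, quota support is gated by a single boolean key; the snippet below flips it on for illustration and is not something this test does.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With this key left at its default (false), the log above prints
    // "Quota support disabled" and the space quota manager is not started.
    conf.setBoolean("hbase.quota.enabled", true);
    System.out.println("quotas enabled: " + conf.getBoolean("hbase.quota.enabled", false));
  }
}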
2023-07-21 15:16:36,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:36,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:36,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:36,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:36,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:36,669 DEBUG [hconnection-0x75c83904-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:36,673 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:36,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:36,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:36,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:36,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:36,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:53818 deadline: 1689953796698, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
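[editor's note] The records above show the master rejecting an attempt to move jenkins-hbase17.apache.org:33893 (the master's own address, which is not a registered region server) into the newly added "master" group, and the WARN that follows is the same ConstraintException surfacing on the client side during test setup/teardown. As a rough sketch only, not the test's actual code, a caller using the hbase-rsgroup client would drive the same server-side path roughly as below; the connection setup is an illustrative assumption.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        // Moving an address that is not a live, registered region server
        // (here, the master's host:port) is rejected by RSGroupAdminServer.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33893)),
            "master");
      } catch (ConstraintException e) {
        // Expected: "Server ... is either offline or it does not exist."
      }
    }
  }
}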
2023-07-21 15:16:36,701 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:16:36,704 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:36,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:36,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:36,707 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:36,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:36,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:36,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:36,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:36,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:36,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:36,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:36,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:36,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:36,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:36,764 
INFO [RS:3;jenkins-hbase17:41557] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41557%2C1689952596371, suffix=, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,41557,1689952596371, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:36,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:36,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:36,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:36,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(238): Moving server region 6b64a17fcefcc8a68fcd4f0dcc651985, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:36,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:36,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:36,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:36,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:36,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, REOPEN/MOVE 2023-07-21 15:16:36,801 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, REOPEN/MOVE 2023-07-21 15:16:36,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(238): Moving server region a1be046ee9a2834d581cd55948dca519, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 
2023-07-21 15:16:36,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:36,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:36,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:36,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:36,803 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,803 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952596803"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952596803"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952596803"}]},"ts":"1689952596803"} 2023-07-21 15:16:36,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE 2023-07-21 15:16:36,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:36,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:36,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:36,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:36,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:36,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:36,811 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE 2023-07-21 15:16:36,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:36,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 15:16:36,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to 
group default, current retry=0 2023-07-21 15:16:36,823 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 15:16:36,823 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:36,824 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,829 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952596824"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952596824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952596824"}]},"ts":"1689952596824"} 2023-07-21 15:16:36,829 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:36,831 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:36,831 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,37121,1689952592049, state=CLOSING 2023-07-21 15:16:36,834 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:36,834 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=14, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:36,834 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:36,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:36,844 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:36,845 INFO [RS:3;jenkins-hbase17:41557] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,41557,1689952596371/jenkins-hbase17.apache.org%2C41557%2C1689952596371.1689952596772 2023-07-21 15:16:36,846 DEBUG [RS:3;jenkins-hbase17:41557] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK], DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK]] 2023-07-21 15:16:36,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:36,990 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 15:16:36,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6b64a17fcefcc8a68fcd4f0dcc651985, disabling compactions & flushes 2023-07-21 15:16:36,992 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:16:36,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:36,992 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:16:36,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:36,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. after waiting 0 ms 2023-07-21 15:16:36,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
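[editor's note] The wal.AbstractFSWAL records above show the new region server's WAL being created by AsyncFSWALProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=32 over a three-datanode pipeline. For orientation only, these figures map to standard HBase settings; the snippet below is a hedged sketch of the relevant keys, with values that mirror what the log implies rather than a verified copy of this job's configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // WAL implementation; "asyncfs" selects AsyncFSWALProvider as seen above.
    conf.set("hbase.wal.provider", "asyncfs");
    // WAL block size; rollsize is blocksize times the roll multiplier
    // (0.5 of 256 MB gives the 128 MB rollsize in the log).
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // Upper bound on WAL files before flushes are forced (maxLogs=32 in the log).
    conf.setInt("hbase.regionserver.maxlogs", 32);
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}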
2023-07-21 15:16:36,992 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:16:36,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:16:36,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:16:36,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 6b64a17fcefcc8a68fcd4f0dcc651985 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 15:16:36,993 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.22 KB heapSize=6.16 KB 2023-07-21 15:16:37,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/.tmp/info/d643ca5d5ca540fc82ad538ada19ec86 2023-07-21 15:16:37,103 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.04 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/info/5c581972cd3541bd860ceaa2d392ecf2 2023-07-21 15:16:37,154 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/.tmp/info/d643ca5d5ca540fc82ad538ada19ec86 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info/d643ca5d5ca540fc82ad538ada19ec86 2023-07-21 15:16:37,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info/d643ca5d5ca540fc82ad538ada19ec86, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 15:16:37,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6b64a17fcefcc8a68fcd4f0dcc651985 in 179ms, sequenceid=6, compaction requested=false 2023-07-21 15:16:37,174 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 15:16:37,199 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/table/34fc2ca56c1e4cd4a082d53396a08a96 2023-07-21 15:16:37,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 15:16:37,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:37,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6b64a17fcefcc8a68fcd4f0dcc651985: 2023-07-21 15:16:37,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 6b64a17fcefcc8a68fcd4f0dcc651985 move to jenkins-hbase17.apache.org,46091,1689952592464 record at close sequenceid=6 2023-07-21 15:16:37,208 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:37,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:37,215 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/info/5c581972cd3541bd860ceaa2d392ecf2 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info/5c581972cd3541bd860ceaa2d392ecf2 2023-07-21 15:16:37,227 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info/5c581972cd3541bd860ceaa2d392ecf2, entries=22, sequenceid=16, filesize=7.3 K 2023-07-21 15:16:37,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/table/34fc2ca56c1e4cd4a082d53396a08a96 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table/34fc2ca56c1e4cd4a082d53396a08a96 2023-07-21 15:16:37,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table/34fc2ca56c1e4cd4a082d53396a08a96, entries=4, sequenceid=16, filesize=4.8 K 2023-07-21 15:16:37,246 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.22 KB/3296, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 253ms, sequenceid=16, compaction requested=false 2023-07-21 15:16:37,246 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 15:16:37,269 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-21 15:16:37,270 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:37,271 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:37,271 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:16:37,271 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase17.apache.org,46091,1689952592464 record at close sequenceid=16 2023-07-21 15:16:37,273 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 15:16:37,274 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 15:16:37,278 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=14 2023-07-21 15:16:37,279 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=14, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,37121,1689952592049 in 440 msec 2023-07-21 15:16:37,280 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:37,431 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:37,432 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46091,1689952592464, state=OPENING 2023-07-21 15:16:37,434 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:37,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:37,434 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:37,593 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:37,593 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:37,597 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:49834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:37,602 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:16:37,602 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:37,605 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): 
WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46091%2C1689952592464.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,46091,1689952592464, archiveDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs, maxLogs=32 2023-07-21 15:16:37,631 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK] 2023-07-21 15:16:37,633 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK] 2023-07-21 15:16:37,633 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK] 2023-07-21 15:16:37,645 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/WALs/jenkins-hbase17.apache.org,46091,1689952592464/jenkins-hbase17.apache.org%2C46091%2C1689952592464.meta.1689952597607.meta 2023-07-21 15:16:37,645 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45531,DS-31048df3-0b7f-4703-8d65-58f89b639bd1,DISK], DatanodeInfoWithStorage[127.0.0.1:35525,DS-5d731871-d680-4e5a-ad7e-ad8f2d4e774c,DISK], DatanodeInfoWithStorage[127.0.0.1:36795,DS-c623773f-f75e-4388-ab85-117ca30bbc47,DISK]] 2023-07-21 15:16:37,645 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:16:37,646 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
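[editor's note] The records above show hbase:meta being reopened on the new server and the MultiRowMutationEndpoint coprocessor being loaded "from HTD", i.e. it is declared on the table descriptor rather than in hbase-site.xml. As a loose illustration of that mechanism only, a user table can declare a coprocessor the same way; the table name below is a placeholder and this is not code from the test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CoprocessorOnHtdSketch {
  public static void main(String[] args) throws Exception {
    // Declaring a coprocessor on the table descriptor makes every region
    // server hosting the table load it at region open, as seen in the log.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_table"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td.getCoprocessorDescriptors());
  }
}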
2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:16:37,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:16:37,648 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:37,649 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info 2023-07-21 15:16:37,650 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info 2023-07-21 15:16:37,650 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:37,666 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info/5c581972cd3541bd860ceaa2d392ecf2 2023-07-21 15:16:37,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:37,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:37,670 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:37,670 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:37,671 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:37,671 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:37,672 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:37,673 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table 2023-07-21 15:16:37,673 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table 2023-07-21 15:16:37,674 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:37,699 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table/34fc2ca56c1e4cd4a082d53396a08a96 2023-07-21 15:16:37,699 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:37,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:37,706 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740 2023-07-21 15:16:37,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:16:37,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:37,721 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10498871200, jitterRate=-0.022216424345970154}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:37,721 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:37,723 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689952597593 2023-07-21 15:16:37,739 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:16:37,740 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:16:37,741 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46091,1689952592464, state=OPEN 2023-07-21 15:16:37,742 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:37,743 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:37,746 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=CLOSED 2023-07-21 15:16:37,746 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952597746"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952597746"}]},"ts":"1689952597746"} 2023-07-21 15:16:37,747 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37121] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 217 connection: 136.243.18.41:60278 deadline: 1689952657747, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=16. 
2023-07-21 15:16:37,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-21 15:16:37,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46091,1689952592464 in 309 msec 2023-07-21 15:16:37,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 940 msec 2023-07-21 15:16:37,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 15:16:37,849 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:37,852 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:32782, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:37,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-21 15:16:37,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,37121,1689952592049 in 1.0470 sec 2023-07-21 15:16:37,870 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:37,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:37,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a1be046ee9a2834d581cd55948dca519, disabling compactions & flushes 2023-07-21 15:16:37,898 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:37,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:37,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. after waiting 0 ms 2023-07-21 15:16:37,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 
2023-07-21 15:16:37,898 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing a1be046ee9a2834d581cd55948dca519 1/1 column families, dataSize=1.40 KB heapSize=2.40 KB 2023-07-21 15:16:37,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.40 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/852fd3871d9d424fa62a04581adf7953 2023-07-21 15:16:37,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/852fd3871d9d424fa62a04581adf7953 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/852fd3871d9d424fa62a04581adf7953 2023-07-21 15:16:38,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/852fd3871d9d424fa62a04581adf7953, entries=3, sequenceid=9, filesize=5.2 K 2023-07-21 15:16:38,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.40 KB/1437, heapSize ~2.38 KB/2440, currentSize=0 B/0 for a1be046ee9a2834d581cd55948dca519 in 113ms, sequenceid=9, compaction requested=false 2023-07-21 15:16:38,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:16:38,020 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:16:38,021 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:38,021 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952598021"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952598021"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952598021"}]},"ts":"1689952598021"} 2023-07-21 15:16:38,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:38,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 15:16:38,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:38,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:38,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:38,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding a1be046ee9a2834d581cd55948dca519 move to jenkins-hbase17.apache.org,43323,1689952592244 record at close sequenceid=9 2023-07-21 15:16:38,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,055 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=CLOSED 2023-07-21 15:16:38,055 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952598055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952598055"}]},"ts":"1689952598055"} 2023-07-21 15:16:38,063 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-21 15:16:38,063 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; CloseRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,37121,1689952592049 in 1.2200 sec 2023-07-21 15:16:38,064 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:38,189 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open 
hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:38,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6b64a17fcefcc8a68fcd4f0dcc651985, NAME => 'hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:38,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:38,190 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,190 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,192 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,194 DEBUG [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info 2023-07-21 15:16:38,194 DEBUG [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info 2023-07-21 15:16:38,195 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6b64a17fcefcc8a68fcd4f0dcc651985 columnFamilyName info 2023-07-21 15:16:38,209 DEBUG [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/info/d643ca5d5ca540fc82ad538ada19ec86 2023-07-21 15:16:38,210 INFO [StoreOpener-6b64a17fcefcc8a68fcd4f0dcc651985-1] regionserver.HStore(310): Store=6b64a17fcefcc8a68fcd4f0dcc651985/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 15:16:38,211 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,214 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:38,215 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:38,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952598214"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952598214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952598214"}]},"ts":"1689952598214"} 2023-07-21 15:16:38,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:38,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:16:38,226 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 6b64a17fcefcc8a68fcd4f0dcc651985; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10747078240, jitterRate=8.996576070785522E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:38,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 6b64a17fcefcc8a68fcd4f0dcc651985: 2023-07-21 15:16:38,228 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985., pid=19, masterSystemTime=1689952598181 2023-07-21 15:16:38,233 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:16:38,233 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
2023-07-21 15:16:38,245 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=6b64a17fcefcc8a68fcd4f0dcc651985, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:38,246 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952598245"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952598245"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952598245"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952598245"}]},"ts":"1689952598245"} 2023-07-21 15:16:38,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-21 15:16:38,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure 6b64a17fcefcc8a68fcd4f0dcc651985, server=jenkins-hbase17.apache.org,46091,1689952592464 in 226 msec 2023-07-21 15:16:38,264 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6b64a17fcefcc8a68fcd4f0dcc651985, REOPEN/MOVE in 1.4610 sec 2023-07-21 15:16:38,372 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:38,373 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:38,377 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:38,383 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:38,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a1be046ee9a2834d581cd55948dca519, NAME => 'hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:38,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:38,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. service=MultiRowMutationService 2023-07-21 15:16:38,384 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:16:38,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:38,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,386 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,389 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:38,389 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:38,390 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a1be046ee9a2834d581cd55948dca519 columnFamilyName m 2023-07-21 15:16:38,401 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/852fd3871d9d424fa62a04581adf7953 2023-07-21 15:16:38,401 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(310): Store=a1be046ee9a2834d581cd55948dca519/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:38,404 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,407 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,411 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:38,412 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a1be046ee9a2834d581cd55948dca519; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@34058a31, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:38,412 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:38,413 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519., pid=20, masterSystemTime=1689952598372 2023-07-21 15:16:38,419 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:38,420 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:38,421 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:38,421 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952598421"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952598421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952598421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952598421"}]},"ts":"1689952598421"} 2023-07-21 15:16:38,426 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-21 15:16:38,427 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,43323,1689952592244 in 205 msec 2023-07-21 15:16:38,429 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE in 1.6240 sec 2023-07-21 15:16:38,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to default 2023-07-21 15:16:38,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:38,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:38,823 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37121] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:60312 deadline: 1689952658823, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=43323 startCode=1689952592244. As of locationSeqNum=9. 2023-07-21 15:16:38,930 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37121] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:60312 deadline: 1689952658930, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=16. 2023-07-21 15:16:39,033 DEBUG [hconnection-0x75c83904-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:39,042 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:32794, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:39,060 DEBUG [hconnection-0x75c83904-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:39,068 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34994, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:39,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:39,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:39,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:39,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:39,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:39,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:39,097 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute 
state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:39,100 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37121] ipc.CallRunner(144): callId: 51 service: ClientService methodName: ExecService size: 626 connection: 136.243.18.41:60278 deadline: 1689952659100, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=43323 startCode=1689952592244. As of locationSeqNum=9. 2023-07-21 15:16:39,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-21 15:16:39,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:39,205 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:39,207 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:39,211 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:39,211 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:39,212 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:39,213 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:39,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:39,238 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:39,250 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:39,251 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 empty. 2023-07-21 15:16:39,257 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:39,257 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:39,258 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc empty. 
2023-07-21 15:16:39,260 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:39,268 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:39,272 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:39,273 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 empty. 2023-07-21 15:16:39,273 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 empty. 2023-07-21 15:16:39,274 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:39,277 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:39,280 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:39,282 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 empty. 
2023-07-21 15:16:39,283 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:39,283 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 15:16:39,352 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:39,357 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 627dbe9c0a5b6349e9bc792a68693db0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:39,366 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2cc1aa681b2383de5e696eee528d29cc, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:39,369 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7376a37170cc6f0ccc1043ebc65d5dd2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:39,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:39,471 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:39,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 7376a37170cc6f0ccc1043ebc65d5dd2, disabling compactions & flushes 2023-07-21 15:16:39,474 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:39,474 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:39,474 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. after waiting 0 ms 2023-07-21 15:16:39,475 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:39,475 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:39,476 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 7376a37170cc6f0ccc1043ebc65d5dd2: 2023-07-21 15:16:39,476 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3d3a0f300250ece4875b5c9c552e73a1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2cc1aa681b2383de5e696eee528d29cc, disabling compactions & flushes 2023-07-21 15:16:39,479 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. after waiting 0 ms 2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:39,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:39,479 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:39,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 627dbe9c0a5b6349e9bc792a68693db0, disabling compactions & flushes 2023-07-21 15:16:39,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2cc1aa681b2383de5e696eee528d29cc: 2023-07-21 15:16:39,481 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:39,482 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:39,482 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0a77f970ea2d7c4eca8b5fbdd49d6571, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:39,482 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. after waiting 0 ms 2023-07-21 15:16:39,482 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:39,482 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 
2023-07-21 15:16:39,482 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 627dbe9c0a5b6349e9bc792a68693db0: 2023-07-21 15:16:39,510 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:39,511 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0a77f970ea2d7c4eca8b5fbdd49d6571, disabling compactions & flushes 2023-07-21 15:16:39,511 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:39,511 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:39,512 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. after waiting 0 ms 2023-07-21 15:16:39,512 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:39,512 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:39,512 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0a77f970ea2d7c4eca8b5fbdd49d6571: 2023-07-21 15:16:39,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:39,898 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:39,898 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 3d3a0f300250ece4875b5c9c552e73a1, disabling compactions & flushes 2023-07-21 15:16:39,898 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:39,899 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:39,899 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
after waiting 0 ms 2023-07-21 15:16:39,899 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:39,899 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:39,899 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 3d3a0f300250ece4875b5c9c552e73a1: 2023-07-21 15:16:39,906 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:39,908 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952599907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952599907"}]},"ts":"1689952599907"} 2023-07-21 15:16:39,908 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952599907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952599907"}]},"ts":"1689952599907"} 2023-07-21 15:16:39,908 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952599907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952599907"}]},"ts":"1689952599907"} 2023-07-21 15:16:39,908 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952599907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952599907"}]},"ts":"1689952599907"} 2023-07-21 15:16:39,909 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952599907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952599907"}]},"ts":"1689952599907"} 2023-07-21 15:16:39,965 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 15:16:39,967 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:39,968 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952599967"}]},"ts":"1689952599967"} 2023-07-21 15:16:39,970 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 15:16:39,974 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:39,974 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:39,974 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:39,974 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:39,974 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, ASSIGN}] 2023-07-21 15:16:39,977 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, ASSIGN 2023-07-21 15:16:39,979 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, ASSIGN 2023-07-21 15:16:39,980 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, ASSIGN 2023-07-21 15:16:39,980 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, ASSIGN 2023-07-21 15:16:39,981 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:39,981 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, ASSIGN 2023-07-21 15:16:39,982 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:39,982 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:39,982 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:39,985 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:40,132 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
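Once the balancer has handed the five ASSIGN subprocedures their targets above (the servers on ports 46091 and 43323), the resulting placement is visible to any client through the RegionLocator API. A small sketch, assuming an already-open Connection; only the table name is taken from the log:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    static void printAssignments(Connection conn) throws Exception {
      TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      try (RegionLocator locator = conn.getRegionLocator(name)) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          // The encoded names match the ids in the log, e.g. 627dbe9c0a5b6349e9bc792a68693db0
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }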
2023-07-21 15:16:40,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,139 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600139"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952600139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952600139"}]},"ts":"1689952600139"} 2023-07-21 15:16:40,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,140 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:40,141 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952600140"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952600140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952600140"}]},"ts":"1689952600140"} 2023-07-21 15:16:40,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600140"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952600140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952600140"}]},"ts":"1689952600140"} 2023-07-21 15:16:40,140 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,142 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600140"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952600140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952600140"}]},"ts":"1689952600140"} 2023-07-21 15:16:40,140 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:40,142 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952600140"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952600140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952600140"}]},"ts":"1689952600140"} 2023-07-21 15:16:40,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=25, state=RUNNABLE; OpenRegionProcedure 
3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:40,153 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=22, state=RUNNABLE; OpenRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:40,157 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=23, state=RUNNABLE; OpenRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:40,159 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=24, state=RUNNABLE; OpenRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:40,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=26, state=RUNNABLE; OpenRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:40,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:40,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:40,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 627dbe9c0a5b6349e9bc792a68693db0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 15:16:40,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:40,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,320 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
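Each "Created cacheConfig" line above is the store for column family f coming up with its effective caching flags (cacheDataOnRead=true, cacheEvictOnClose=false, prefetchOnOpen=false, ...) and, in the Store= line that follows, encoding=NONE and compression=NONE. At table-definition time these map onto the column-family descriptor; a sketch that sets the same values explicitly (they are also the defaults), assuming the standard ColumnFamilyDescriptorBuilder API:

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.util.Bytes;

    ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
        .setBlockCacheEnabled(true)          // cacheDataOnRead=true in the CacheConfig line
        .setCacheDataOnWrite(false)
        .setCacheIndexesOnWrite(false)
        .setCacheBloomsOnWrite(false)
        .setEvictBlocksOnClose(false)
        .setPrefetchBlocksOnOpen(false)
        .setCompressionType(Compression.Algorithm.NONE)
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .build();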
2023-07-21 15:16:40,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2cc1aa681b2383de5e696eee528d29cc, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 15:16:40,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,326 DEBUG [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/f 2023-07-21 15:16:40,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:40,326 DEBUG [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/f 2023-07-21 15:16:40,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,327 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 627dbe9c0a5b6349e9bc792a68693db0 columnFamilyName f 2023-07-21 15:16:40,328 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] regionserver.HStore(310): Store=627dbe9c0a5b6349e9bc792a68693db0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:40,328 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,337 DEBUG [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/f 2023-07-21 15:16:40,338 DEBUG [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/f 2023-07-21 15:16:40,339 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2cc1aa681b2383de5e696eee528d29cc columnFamilyName f 2023-07-21 15:16:40,340 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] regionserver.HStore(310): Store=2cc1aa681b2383de5e696eee528d29cc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:40,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:40,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:40,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 627dbe9c0a5b6349e9bc792a68693db0; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11219343360, jitterRate=0.044882774353027344}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:40,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 627dbe9c0a5b6349e9bc792a68693db0: 2023-07-21 15:16:40,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0., pid=28, masterSystemTime=1689952600306 2023-07-21 15:16:40,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:40,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:40,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:40,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7376a37170cc6f0ccc1043ebc65d5dd2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 15:16:40,369 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,369 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952600369"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952600369"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952600369"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952600369"}]},"ts":"1689952600369"} 2023-07-21 15:16:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,377 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=22 2023-07-21 15:16:40,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=22, state=SUCCESS; OpenRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,46091,1689952592464 in 220 msec 2023-07-21 15:16:40,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:40,388 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, ASSIGN in 403 msec 2023-07-21 15:16:40,388 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2cc1aa681b2383de5e696eee528d29cc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10009990240, jitterRate=-0.06774701178073883}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:40,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2cc1aa681b2383de5e696eee528d29cc: 2023-07-21 15:16:40,388 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc., pid=29, masterSystemTime=1689952600315 2023-07-21 15:16:40,391 DEBUG [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/f 2023-07-21 15:16:40,391 DEBUG [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/f 2023-07-21 15:16:40,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
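The CompactionConfiguration(173) lines record each store's effective compaction settings: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, and a weekly major-compaction period with 0.5 jitter. These come from stock hbase-site keys; a sketch setting the same values programmatically, with the key names being the standard HBase ones and the values copied from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period = 7 days in ms
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000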
2023-07-21 15:16:40,392 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7376a37170cc6f0ccc1043ebc65d5dd2 columnFamilyName f 2023-07-21 15:16:40,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:40,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:40,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a77f970ea2d7c4eca8b5fbdd49d6571, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 15:16:40,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:40,393 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] regionserver.HStore(310): Store=7376a37170cc6f0ccc1043ebc65d5dd2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:40,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,394 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:40,394 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600394"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952600394"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952600394"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952600394"}]},"ts":"1689952600394"} 2023-07-21 15:16:40,395 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,399 DEBUG [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/f 2023-07-21 15:16:40,399 DEBUG [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/f 2023-07-21 15:16:40,399 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a77f970ea2d7c4eca8b5fbdd49d6571 columnFamilyName f 2023-07-21 15:16:40,400 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] regionserver.HStore(310): Store=0a77f970ea2d7c4eca8b5fbdd49d6571/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:40,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=23 2023-07-21 15:16:40,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 
7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:40,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=23, state=SUCCESS; OpenRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,43323,1689952592244 in 242 msec 2023-07-21 15:16:40,410 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, ASSIGN in 433 msec 2023-07-21 15:16:40,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:40,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:40,412 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7376a37170cc6f0ccc1043ebc65d5dd2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11812565280, jitterRate=0.10013087093830109}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:40,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7376a37170cc6f0ccc1043ebc65d5dd2: 2023-07-21 15:16:40,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2., pid=30, masterSystemTime=1689952600306 2023-07-21 15:16:40,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:40,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:40,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:40,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
2023-07-21 15:16:40,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3d3a0f300250ece4875b5c9c552e73a1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 15:16:40,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,417 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0a77f970ea2d7c4eca8b5fbdd49d6571; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10915536960, jitterRate=0.016588598489761353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:40,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:40,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0a77f970ea2d7c4eca8b5fbdd49d6571: 2023-07-21 15:16:40,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600417"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952600417"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952600417"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952600417"}]},"ts":"1689952600417"} 2023-07-21 15:16:40,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571., pid=31, masterSystemTime=1689952600315 2023-07-21 15:16:40,419 INFO [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 
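The "Opened ...; next sequenceid=2; SteppingSplitPolicy..." lines print each region's split policy via its toString; the jittered desiredMaxFileSize values (roughly 10.0 to 11.8 GB) are consistent with the default hbase.hregion.max.filesize of 10737418240 bytes scaled by the printed jitterRate, so this table simply inherits the defaults. If a table needed a non-default policy or size it would be set on the table descriptor; a hedged sketch of that, not something this test does:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        // Same policy the log reports; referenced by name to avoid a server-side compile dependency.
        .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        // Base size that the jittered desiredMaxFileSize values are derived from (10 GB).
        .setMaxFileSize(10L * 1024 * 1024 * 1024)
        .build();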
2023-07-21 15:16:40,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:40,422 DEBUG [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/f 2023-07-21 15:16:40,422 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:40,423 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952600422"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952600422"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952600422"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952600422"}]},"ts":"1689952600422"} 2023-07-21 15:16:40,424 DEBUG [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/f 2023-07-21 15:16:40,425 INFO [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3d3a0f300250ece4875b5c9c552e73a1 columnFamilyName f 2023-07-21 15:16:40,427 INFO [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] regionserver.HStore(310): Store=3d3a0f300250ece4875b5c9c552e73a1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:40,427 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=24 2023-07-21 15:16:40,427 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=24, state=SUCCESS; OpenRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,46091,1689952592464 in 261 msec 2023-07-21 15:16:40,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,429 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=26 2023-07-21 15:16:40,429 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, ASSIGN in 453 msec 2023-07-21 15:16:40,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,430 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=26, state=SUCCESS; OpenRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,43323,1689952592244 in 266 msec 2023-07-21 15:16:40,432 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, ASSIGN in 455 msec 2023-07-21 15:16:40,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:40,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:40,454 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 3d3a0f300250ece4875b5c9c552e73a1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11222701440, jitterRate=0.04519551992416382}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:40,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 3d3a0f300250ece4875b5c9c552e73a1: 2023-07-21 15:16:40,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1., pid=27, masterSystemTime=1689952600306 2023-07-21 15:16:40,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:40,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
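The recurring "Checking to see if procedure is done pid=21" polls, the "Finished pid=21 ... CreateTableProcedure" entry just below, and the HBaseTestingUtility(3430) wait that follows are the master-side view of a client blocking on table creation and the test harness then confirming assignment. A sketch of that client/test side, assuming the usual Admin and HBaseTestingUtility calls; TEST_UTIL, desc and splits are placeholders, not taken from the log:

    // Synchronous create blocks until the CreateTableProcedure (pid=21 above) finishes.
    admin.createTable(desc, splits);
    // Or create asynchronously and wait on the returned future instead:
    // admin.createTableAsync(desc, splits).get(60, TimeUnit.SECONDS);

    // The test utility then waits until every region of the table is assigned,
    // matching "Waiting until all regions ... get assigned. Timeout = 60000ms".
    TEST_UTIL.waitUntilAllRegionsAssigned(
        TableName.valueOf("Group_testTableMoveTruncateAndDrop"), 60000);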
2023-07-21 15:16:40,459 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:40,459 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952600459"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952600459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952600459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952600459"}]},"ts":"1689952600459"} 2023-07-21 15:16:40,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=25 2023-07-21 15:16:40,469 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=25, state=SUCCESS; OpenRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,46091,1689952592464 in 321 msec 2023-07-21 15:16:40,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-21 15:16:40,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, ASSIGN in 495 msec 2023-07-21 15:16:40,475 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:40,475 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952600475"}]},"ts":"1689952600475"} 2023-07-21 15:16:40,477 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 15:16:40,480 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:40,484 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.3880 sec 2023-07-21 15:16:40,626 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:40,710 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:16:40,710 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 15:16:40,711 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:40,711 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 15:16:40,711 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:16:40,711 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 15:16:40,712 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-21 15:16:41,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:16:41,237 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-21 15:16:41,237 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 15:16:41,238 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:41,240 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37121] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 136.243.18.41:60302 deadline: 1689952661240, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=16. 2023-07-21 15:16:41,345 DEBUG [hconnection-0x5a0ffa86-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:41,348 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:32808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:41,360 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-21 15:16:41,361 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:41,361 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 
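The entries that follow are the test asking the RSGroupAdminEndpoint which group the table is in and then moving it to Group_testTableMoveTruncateAndDrop_1456384549, after which the master stores REOPEN/MOVE procedures for each of the five regions. A sketch of the same two calls through the rsgroup client in this hbase-rsgroup module; the client construction is illustrative, and it assumes the target group was created beforehand with addRSGroup and has servers in it:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    static void moveTableToGroup(Connection conn) throws Exception {
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Corresponds to "initiates rsgroup info retrieval, table=..."
      RSGroupInfo before = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table currently in group " + before.getName());
      // Corresponds to "move tables [Group_testTableMoveTruncateAndDrop] to rsgroup ..."
      rsGroupAdmin.moveTables(Collections.singleton(table),
          "Group_testTableMoveTruncateAndDrop_1456384549");
    }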
2023-07-21 15:16:41,361 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:41,366 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:41,369 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55722, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:41,372 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:41,377 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:41,378 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:41,380 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:41,382 DEBUG [Listener at localhost.localdomain/34137] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:41,384 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:32824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:41,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:41,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:41,398 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:41,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:41,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:41,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,416 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 627dbe9c0a5b6349e9bc792a68693db0 to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:41,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:41,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:41,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:41,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:41,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, REOPEN/MOVE 2023-07-21 15:16:41,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 2cc1aa681b2383de5e696eee528d29cc to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,419 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, REOPEN/MOVE 2023-07-21 15:16:41,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:41,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:41,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:41,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:41,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:41,421 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:41,421 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601421"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601421"}]},"ts":"1689952601421"} 2023-07-21 15:16:41,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, REOPEN/MOVE 2023-07-21 15:16:41,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 7376a37170cc6f0ccc1043ebc65d5dd2 to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,425 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, REOPEN/MOVE 2023-07-21 15:16:41,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:41,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:41,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:41,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:41,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:41,426 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:41,426 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601426"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601426"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601426"}]},"ts":"1689952601426"} 2023-07-21 15:16:41,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, REOPEN/MOVE 2023-07-21 15:16:41,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 3d3a0f300250ece4875b5c9c552e73a1 to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,427 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, REOPEN/MOVE 2023-07-21 15:16:41,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:41,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:41,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 
15:16:41,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:41,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:41,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:41,429 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:41,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:41,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, REOPEN/MOVE 2023-07-21 15:16:41,430 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601429"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601429"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601429"}]},"ts":"1689952601429"} 2023-07-21 15:16:41,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 0a77f970ea2d7c4eca8b5fbdd49d6571 to RSGroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:41,431 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, REOPEN/MOVE 2023-07-21 15:16:41,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:41,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:41,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:41,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:41,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:41,432 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:41,433 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601432"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601432"}]},"ts":"1689952601432"} 2023-07-21 15:16:41,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, REOPEN/MOVE 2023-07-21 15:16:41,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1456384549, current retry=0 2023-07-21 15:16:41,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=34, state=RUNNABLE; CloseRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:41,434 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, REOPEN/MOVE 2023-07-21 15:16:41,436 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:41,436 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601436"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601436"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601436"}]},"ts":"1689952601436"} 2023-07-21 15:16:41,436 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=36, state=RUNNABLE; CloseRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:41,440 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=38, state=RUNNABLE; CloseRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:41,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:41,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 627dbe9c0a5b6349e9bc792a68693db0, disabling compactions & flushes 2023-07-21 15:16:41,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:41,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 
2023-07-21 15:16:41,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. after waiting 0 ms 2023-07-21 15:16:41,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:41,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0a77f970ea2d7c4eca8b5fbdd49d6571, disabling compactions & flushes 2023-07-21 15:16:41,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:41,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:41,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. after waiting 0 ms 2023-07-21 15:16:41,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:41,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:41,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:41,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:41,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0a77f970ea2d7c4eca8b5fbdd49d6571: 2023-07-21 15:16:41,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 0a77f970ea2d7c4eca8b5fbdd49d6571 move to jenkins-hbase17.apache.org,37121,1689952592049 record at close sequenceid=2 2023-07-21 15:16:41,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 
2023-07-21 15:16:41,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 627dbe9c0a5b6349e9bc792a68693db0: 2023-07-21 15:16:41,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 627dbe9c0a5b6349e9bc792a68693db0 move to jenkins-hbase17.apache.org,37121,1689952592049 record at close sequenceid=2 2023-07-21 15:16:41,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:41,618 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=CLOSED 2023-07-21 15:16:41,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,620 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601618"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952601618"}]},"ts":"1689952601618"} 2023-07-21 15:16:41,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7376a37170cc6f0ccc1043ebc65d5dd2, disabling compactions & flushes 2023-07-21 15:16:41,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. after waiting 0 ms 2023-07-21 15:16:41,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2cc1aa681b2383de5e696eee528d29cc, disabling compactions & flushes 2023-07-21 15:16:41,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:41,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
2023-07-21 15:16:41,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. after waiting 0 ms 2023-07-21 15:16:41,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:41,623 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=CLOSED 2023-07-21 15:16:41,624 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952601623"}]},"ts":"1689952601623"} 2023-07-21 15:16:41,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:41,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-21 15:16:41,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,46091,1689952592464 in 197 msec 2023-07-21 15:16:41,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7376a37170cc6f0ccc1043ebc65d5dd2: 2023-07-21 15:16:41,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 7376a37170cc6f0ccc1043ebc65d5dd2 move to jenkins-hbase17.apache.org,37121,1689952592049 record at close sequenceid=2 2023-07-21 15:16:41,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:41,633 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:41,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
2023-07-21 15:16:41,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2cc1aa681b2383de5e696eee528d29cc: 2023-07-21 15:16:41,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 2cc1aa681b2383de5e696eee528d29cc move to jenkins-hbase17.apache.org,41557,1689952596371 record at close sequenceid=2 2023-07-21 15:16:41,635 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-21 15:16:41,635 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; CloseRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,43323,1689952592244 in 187 msec 2023-07-21 15:16:41,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 3d3a0f300250ece4875b5c9c552e73a1, disabling compactions & flushes 2023-07-21 15:16:41,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:41,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:41,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. after waiting 0 ms 2023-07-21 15:16:41,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
2023-07-21 15:16:41,637 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:41,637 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=CLOSED 2023-07-21 15:16:41,638 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952601637"}]},"ts":"1689952601637"} 2023-07-21 15:16:41,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,639 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=CLOSED 2023-07-21 15:16:41,639 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601639"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952601639"}]},"ts":"1689952601639"} 2023-07-21 15:16:41,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=34 2023-07-21 15:16:41,644 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=34, state=SUCCESS; CloseRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,46091,1689952592464 in 206 msec 2023-07-21 15:16:41,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-21 15:16:41,645 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:41,645 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,43323,1689952592244 in 212 msec 2023-07-21 15:16:41,646 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:41,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits/4.seqid, newMaxSeqId=4, 
maxSeqId=1 2023-07-21 15:16:41,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:41,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 3d3a0f300250ece4875b5c9c552e73a1: 2023-07-21 15:16:41,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 3d3a0f300250ece4875b5c9c552e73a1 move to jenkins-hbase17.apache.org,41557,1689952596371 record at close sequenceid=2 2023-07-21 15:16:41,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,654 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=CLOSED 2023-07-21 15:16:41,654 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601654"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952601654"}]},"ts":"1689952601654"} 2023-07-21 15:16:41,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=36 2023-07-21 15:16:41,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=36, state=SUCCESS; CloseRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,46091,1689952592464 in 220 msec 2023-07-21 15:16:41,660 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:41,783 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 15:16:41,784 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:41,784 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:41,784 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:41,784 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601783"}]},"ts":"1689952601783"} 2023-07-21 15:16:41,784 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601784"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601784"}]},"ts":"1689952601784"} 2023-07-21 15:16:41,784 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:41,784 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601783"}]},"ts":"1689952601783"} 2023-07-21 15:16:41,784 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601783"}]},"ts":"1689952601783"} 2023-07-21 15:16:41,784 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:41,785 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952601783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952601783"}]},"ts":"1689952601783"} 2023-07-21 15:16:41,786 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=36, state=RUNNABLE; OpenRegionProcedure 
3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:41,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=32, state=RUNNABLE; OpenRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:41,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=34, state=RUNNABLE; OpenRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:41,791 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=38, state=RUNNABLE; OpenRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:41,801 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=33, state=RUNNABLE; OpenRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:41,944 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:41,944 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:41,946 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:41,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7376a37170cc6f0ccc1043ebc65d5dd2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 15:16:41,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3d3a0f300250ece4875b5c9c552e73a1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:41,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,952 INFO [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,953 DEBUG [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/f 2023-07-21 15:16:41,953 DEBUG [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/f 2023-07-21 15:16:41,953 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,953 INFO 
[StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3d3a0f300250ece4875b5c9c552e73a1 columnFamilyName f 2023-07-21 15:16:41,954 INFO [StoreOpener-3d3a0f300250ece4875b5c9c552e73a1-1] regionserver.HStore(310): Store=3d3a0f300250ece4875b5c9c552e73a1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:41,955 DEBUG [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/f 2023-07-21 15:16:41,955 DEBUG [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/f 2023-07-21 15:16:41,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,955 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7376a37170cc6f0ccc1043ebc65d5dd2 columnFamilyName f 2023-07-21 15:16:41,956 INFO [StoreOpener-7376a37170cc6f0ccc1043ebc65d5dd2-1] regionserver.HStore(310): Store=7376a37170cc6f0ccc1043ebc65d5dd2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:41,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:41,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:41,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 3d3a0f300250ece4875b5c9c552e73a1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10534028640, jitterRate=-0.018942132592201233}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:41,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 3d3a0f300250ece4875b5c9c552e73a1: 2023-07-21 15:16:41,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1., pid=42, masterSystemTime=1689952601944 2023-07-21 15:16:41,965 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7376a37170cc6f0ccc1043ebc65d5dd2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10479149920, jitterRate=-0.024053111672401428}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:41,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7376a37170cc6f0ccc1043ebc65d5dd2: 2023-07-21 15:16:41,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2., pid=44, masterSystemTime=1689952601944 2023-07-21 15:16:41,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:41,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:41,968 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:41,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
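The StoreOpener entries above (the cacheConfig line, the compaction configuration dump, and "Store=.../f, ... encoding=NONE, compression=NONE") are driven by the column family descriptor the table was created with. The sketch below shows one way such a descriptor could be declared with the public ColumnFamilyDescriptorBuilder/TableDescriptorBuilder API; the specific builder calls are an assumption about how the schema maps onto these log lines, not code taken from the test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

public class TableSchemaSketch {
  public static TableDescriptor descriptor() {
    ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
        .setBlockCacheEnabled(true)                      // cacheDataOnRead=true in the cacheConfig line
        .setCompressionType(Compression.Algorithm.NONE)  // compression=NONE in the HStore line
        .setDataBlockEncoding(DataBlockEncoding.NONE)    // encoding=NONE in the HStore line
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(f)
        .build();
  }
}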
2023-07-21 15:16:41,969 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601968"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952601968"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952601968"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952601968"}]},"ts":"1689952601968"} 2023-07-21 15:16:41,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2cc1aa681b2383de5e696eee528d29cc, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 15:16:41,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:41,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:41,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 
2023-07-21 15:16:41,971 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:41,971 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601971"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952601971"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952601971"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952601971"}]},"ts":"1689952601971"} 2023-07-21 15:16:41,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a77f970ea2d7c4eca8b5fbdd49d6571, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 15:16:41,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:41,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,974 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=36 2023-07-21 15:16:41,975 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=36, state=SUCCESS; OpenRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,41557,1689952596371 in 185 msec 2023-07-21 15:16:41,977 DEBUG [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/f 2023-07-21 15:16:41,977 DEBUG [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/f 2023-07-21 15:16:41,977 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, REOPEN/MOVE in 547 msec 2023-07-21 15:16:41,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=34 2023-07-21 15:16:41,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=34, state=SUCCESS; OpenRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,37121,1689952592049 in 185 msec 2023-07-21 15:16:41,979 DEBUG [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/f 2023-07-21 15:16:41,979 DEBUG [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/f 2023-07-21 15:16:41,979 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a77f970ea2d7c4eca8b5fbdd49d6571 columnFamilyName f 2023-07-21 15:16:41,979 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2cc1aa681b2383de5e696eee528d29cc columnFamilyName f 2023-07-21 15:16:41,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, REOPEN/MOVE in 553 msec 2023-07-21 15:16:41,980 INFO [StoreOpener-2cc1aa681b2383de5e696eee528d29cc-1] regionserver.HStore(310): Store=2cc1aa681b2383de5e696eee528d29cc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 
15:16:41,980 INFO [StoreOpener-0a77f970ea2d7c4eca8b5fbdd49d6571-1] regionserver.HStore(310): Store=0a77f970ea2d7c4eca8b5fbdd49d6571/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:41,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:41,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:41,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2cc1aa681b2383de5e696eee528d29cc; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10286432640, jitterRate=-0.042001307010650635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:41,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0a77f970ea2d7c4eca8b5fbdd49d6571; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10347941280, jitterRate=-0.036272868514060974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:41,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2cc1aa681b2383de5e696eee528d29cc: 2023-07-21 15:16:41,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0a77f970ea2d7c4eca8b5fbdd49d6571: 2023-07-21 15:16:41,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc., pid=46, masterSystemTime=1689952601944 2023-07-21 15:16:41,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571., pid=45, masterSystemTime=1689952601944 
2023-07-21 15:16:41,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:41,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:41,997 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:41,997 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952601997"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952601997"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952601997"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952601997"}]},"ts":"1689952601997"} 2023-07-21 15:16:41,999 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:41,999 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952601999"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952601999"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952601999"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952601999"}]},"ts":"1689952601999"} 2023-07-21 15:16:42,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 
2023-07-21 15:16:42,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 627dbe9c0a5b6349e9bc792a68693db0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 15:16:42,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:42,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=33 2023-07-21 15:16:42,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=33, state=SUCCESS; OpenRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,41557,1689952596371 in 199 msec 2023-07-21 15:16:42,006 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=38 2023-07-21 15:16:42,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=38, state=SUCCESS; OpenRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,37121,1689952592049 in 211 msec 2023-07-21 15:16:42,008 DEBUG [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/f 2023-07-21 15:16:42,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, REOPEN/MOVE in 585 msec 2023-07-21 15:16:42,008 DEBUG [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/f 2023-07-21 15:16:42,009 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, REOPEN/MOVE in 575 msec 2023-07-21 15:16:42,009 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 627dbe9c0a5b6349e9bc792a68693db0 columnFamilyName f 2023-07-21 15:16:42,010 INFO [StoreOpener-627dbe9c0a5b6349e9bc792a68693db0-1] regionserver.HStore(310): Store=627dbe9c0a5b6349e9bc792a68693db0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:42,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 627dbe9c0a5b6349e9bc792a68693db0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10999187200, jitterRate=0.02437913417816162}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:42,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 627dbe9c0a5b6349e9bc792a68693db0: 2023-07-21 15:16:42,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0., pid=43, masterSystemTime=1689952601944 2023-07-21 15:16:42,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:42,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 
2023-07-21 15:16:42,043 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:42,043 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952602043"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952602043"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952602043"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952602043"}]},"ts":"1689952602043"} 2023-07-21 15:16:42,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=32 2023-07-21 15:16:42,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=32, state=SUCCESS; OpenRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,37121,1689952592049 in 257 msec 2023-07-21 15:16:42,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, REOPEN/MOVE in 633 msec 2023-07-21 15:16:42,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-21 15:16:42,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1456384549. 
2023-07-21 15:16:42,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:42,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:42,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:42,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:42,453 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:42,463 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,493 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952602492"}]},"ts":"1689952602492"} 2023-07-21 15:16:42,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-21 15:16:42,495 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 15:16:42,503 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 15:16:42,509 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, UNASSIGN}] 2023-07-21 15:16:42,512 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, UNASSIGN 2023-07-21 15:16:42,513 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, UNASSIGN 2023-07-21 15:16:42,521 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:42,521 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:42,522 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602521"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952602521"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952602521"}]},"ts":"1689952602521"} 2023-07-21 15:16:42,522 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952602521"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952602521"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952602521"}]},"ts":"1689952602521"} 2023-07-21 15:16:42,524 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, UNASSIGN 2023-07-21 15:16:42,525 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, UNASSIGN 2023-07-21 15:16:42,526 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, UNASSIGN 2023-07-21 15:16:42,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:42,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=48, state=RUNNABLE; CloseRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:42,528 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=CLOSING, 
regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:42,528 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602528"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952602528"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952602528"}]},"ts":"1689952602528"} 2023-07-21 15:16:42,528 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:42,529 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952602528"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952602528"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952602528"}]},"ts":"1689952602528"} 2023-07-21 15:16:42,533 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:42,533 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602533"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952602533"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952602533"}]},"ts":"1689952602533"} 2023-07-21 15:16:42,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=50, state=RUNNABLE; CloseRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:42,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=52, state=RUNNABLE; CloseRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:42,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=51, state=RUNNABLE; CloseRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:42,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-21 15:16:42,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:42,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 3d3a0f300250ece4875b5c9c552e73a1, disabling compactions & flushes 2023-07-21 15:16:42,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
2023-07-21 15:16:42,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:42,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. after waiting 0 ms 2023-07-21 15:16:42,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 2023-07-21 15:16:42,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:42,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0a77f970ea2d7c4eca8b5fbdd49d6571, disabling compactions & flushes 2023-07-21 15:16:42,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. after waiting 0 ms 2023-07-21 15:16:42,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:42,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:42,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571. 2023-07-21 15:16:42,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0a77f970ea2d7c4eca8b5fbdd49d6571: 2023-07-21 15:16:42,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1. 
2023-07-21 15:16:42,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 3d3a0f300250ece4875b5c9c552e73a1: 2023-07-21 15:16:42,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:42,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 627dbe9c0a5b6349e9bc792a68693db0, disabling compactions & flushes 2023-07-21 15:16:42,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:42,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:42,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. after waiting 0 ms 2023-07-21 15:16:42,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:42,707 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=0a77f970ea2d7c4eca8b5fbdd49d6571, regionState=CLOSED 2023-07-21 15:16:42,707 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952602707"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952602707"}]},"ts":"1689952602707"} 2023-07-21 15:16:42,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:42,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:42,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2cc1aa681b2383de5e696eee528d29cc, disabling compactions & flushes 2023-07-21 15:16:42,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:42,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:42,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 
after waiting 0 ms 2023-07-21 15:16:42,710 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=3d3a0f300250ece4875b5c9c552e73a1, regionState=CLOSED 2023-07-21 15:16:42,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:42,710 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602710"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952602710"}]},"ts":"1689952602710"} 2023-07-21 15:16:42,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:42,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0. 2023-07-21 15:16:42,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 627dbe9c0a5b6349e9bc792a68693db0: 2023-07-21 15:16:42,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:42,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc. 2023-07-21 15:16:42,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2cc1aa681b2383de5e696eee528d29cc: 2023-07-21 15:16:42,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:42,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7376a37170cc6f0ccc1043ebc65d5dd2, disabling compactions & flushes 2023-07-21 15:16:42,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:42,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:42,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 
after waiting 0 ms 2023-07-21 15:16:42,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:42,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-21 15:16:42,719 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=627dbe9c0a5b6349e9bc792a68693db0, regionState=CLOSED 2023-07-21 15:16:42,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; CloseRegionProcedure 0a77f970ea2d7c4eca8b5fbdd49d6571, server=jenkins-hbase17.apache.org,37121,1689952592049 in 176 msec 2023-07-21 15:16:42,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952602718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952602718"}]},"ts":"1689952602718"} 2023-07-21 15:16:42,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=51 2023-07-21 15:16:42,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=51, state=SUCCESS; CloseRegionProcedure 3d3a0f300250ece4875b5c9c552e73a1, server=jenkins-hbase17.apache.org,41557,1689952596371 in 176 msec 2023-07-21 15:16:42,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:42,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a77f970ea2d7c4eca8b5fbdd49d6571, UNASSIGN in 210 msec 2023-07-21 15:16:42,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:42,727 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=2cc1aa681b2383de5e696eee528d29cc, regionState=CLOSED 2023-07-21 15:16:42,727 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952602727"}]},"ts":"1689952602727"} 2023-07-21 15:16:42,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=48 2023-07-21 15:16:42,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; CloseRegionProcedure 627dbe9c0a5b6349e9bc792a68693db0, server=jenkins-hbase17.apache.org,37121,1689952592049 in 194 msec 2023-07-21 15:16:42,727 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3d3a0f300250ece4875b5c9c552e73a1, UNASSIGN in 214 msec 2023-07-21 15:16:42,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2. 2023-07-21 15:16:42,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7376a37170cc6f0ccc1043ebc65d5dd2: 2023-07-21 15:16:42,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=627dbe9c0a5b6349e9bc792a68693db0, UNASSIGN in 223 msec 2023-07-21 15:16:42,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:42,731 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7376a37170cc6f0ccc1043ebc65d5dd2, regionState=CLOSED 2023-07-21 15:16:42,732 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952602731"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952602731"}]},"ts":"1689952602731"} 2023-07-21 15:16:42,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-21 15:16:42,733 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure 2cc1aa681b2383de5e696eee528d29cc, server=jenkins-hbase17.apache.org,41557,1689952596371 in 204 msec 2023-07-21 15:16:42,735 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2cc1aa681b2383de5e696eee528d29cc, UNASSIGN in 228 msec 2023-07-21 15:16:42,736 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=50 2023-07-21 15:16:42,736 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=50, state=SUCCESS; CloseRegionProcedure 7376a37170cc6f0ccc1043ebc65d5dd2, server=jenkins-hbase17.apache.org,37121,1689952592049 in 200 msec 2023-07-21 15:16:42,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=47 2023-07-21 15:16:42,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7376a37170cc6f0ccc1043ebc65d5dd2, UNASSIGN in 231 msec 2023-07-21 15:16:42,739 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952602739"}]},"ts":"1689952602739"} 2023-07-21 15:16:42,741 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 15:16:42,742 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 15:16:42,746 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 270 msec 2023-07-21 15:16:42,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-21 
15:16:42,800 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-21 15:16:42,802 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$6(2260): Client=jenkins//136.243.18.41 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:42,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 15:16:42,816 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 15:16:42,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-21 15:16:42,829 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:42,829 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:42,829 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,829 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:42,829 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:42,833 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits] 2023-07-21 15:16:42,833 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits] 2023-07-21 15:16:42,833 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits] 2023-07-21 15:16:42,833 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits] 2023-07-21 15:16:42,833 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits] 2023-07-21 15:16:42,848 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571/recovered.edits/7.seqid 2023-07-21 15:16:42,848 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc/recovered.edits/7.seqid 2023-07-21 15:16:42,848 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1/recovered.edits/7.seqid 2023-07-21 15:16:42,849 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2/recovered.edits/7.seqid 2023-07-21 15:16:42,849 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0/recovered.edits/7.seqid 2023-07-21 15:16:42,850 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2cc1aa681b2383de5e696eee528d29cc 2023-07-21 15:16:42,850 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a77f970ea2d7c4eca8b5fbdd49d6571 2023-07-21 15:16:42,850 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3d3a0f300250ece4875b5c9c552e73a1 2023-07-21 15:16:42,850 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7376a37170cc6f0ccc1043ebc65d5dd2 2023-07-21 15:16:42,851 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/627dbe9c0a5b6349e9bc792a68693db0 2023-07-21 15:16:42,851 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 15:16:42,887 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 15:16:42,890 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 15:16:42,891 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 15:16:42,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952602891"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952602891"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952602891"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952602891"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952602891"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,895 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 15:16:42,895 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 627dbe9c0a5b6349e9bc792a68693db0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952599089.627dbe9c0a5b6349e9bc792a68693db0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2cc1aa681b2383de5e696eee528d29cc, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952599089.2cc1aa681b2383de5e696eee528d29cc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7376a37170cc6f0ccc1043ebc65d5dd2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952599089.7376a37170cc6f0ccc1043ebc65d5dd2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3d3a0f300250ece4875b5c9c552e73a1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952599089.3d3a0f300250ece4875b5c9c552e73a1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 0a77f970ea2d7c4eca8b5fbdd49d6571, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952599089.0a77f970ea2d7c4eca8b5fbdd49d6571.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 15:16:42,895 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 15:16:42,896 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952602895"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:42,898 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 15:16:42,905 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:42,905 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:42,905 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:42,905 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:42,906 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:42,907 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb empty. 2023-07-21 15:16:42,907 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff empty. 2023-07-21 15:16:42,908 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 empty. 2023-07-21 15:16:42,908 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade empty. 2023-07-21 15:16:42,908 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 empty. 
2023-07-21 15:16:42,908 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:42,908 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:42,908 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:42,909 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:42,909 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:42,909 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 15:16:42,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-21 15:16:42,930 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:42,933 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => cdeccce8268f9e96444d8cb374b0c0cb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:42,933 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 948fc64543b80de4c1db174f2ba83ade, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:42,933 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => af02bfc4822d3fb75cd3910e9cc589ff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:42,984 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 948fc64543b80de4c1db174f2ba83ade, disabling compactions & flushes 2023-07-21 15:16:42,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. after waiting 0 ms 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:42,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 
2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 948fc64543b80de4c1db174f2ba83ade: 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:42,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing cdeccce8268f9e96444d8cb374b0c0cb, disabling compactions & flushes 2023-07-21 15:16:42,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3f9d36c14432eb6357ba72be58fa2480, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:42,985 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:42,986 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:42,986 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. after waiting 0 ms 2023-07-21 15:16:42,986 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:42,986 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 
2023-07-21 15:16:42,986 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for cdeccce8268f9e96444d8cb374b0c0cb: 2023-07-21 15:16:42,986 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4d061bc934b42acd61790a0e38a84cb5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:42,986 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:42,987 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing af02bfc4822d3fb75cd3910e9cc589ff, disabling compactions & flushes 2023-07-21 15:16:42,987 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:42,987 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:42,987 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. after waiting 0 ms 2023-07-21 15:16:42,987 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:42,987 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 
2023-07-21 15:16:42,987 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for af02bfc4822d3fb75cd3910e9cc589ff: 2023-07-21 15:16:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 4d061bc934b42acd61790a0e38a84cb5, disabling compactions & flushes 2023-07-21 15:16:43,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. after waiting 0 ms 2023-07-21 15:16:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 4d061bc934b42acd61790a0e38a84cb5: 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 3f9d36c14432eb6357ba72be58fa2480, disabling compactions & flushes 2023-07-21 15:16:43,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 
after waiting 0 ms 2023-07-21 15:16:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,009 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,009 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 3f9d36c14432eb6357ba72be58fa2480: 2023-07-21 15:16:43,013 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952603013"}]},"ts":"1689952603013"} 2023-07-21 15:16:43,013 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952603013"}]},"ts":"1689952603013"} 2023-07-21 15:16:43,013 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952603013"}]},"ts":"1689952603013"} 2023-07-21 15:16:43,013 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952603013"}]},"ts":"1689952603013"} 2023-07-21 15:16:43,013 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952603013"}]},"ts":"1689952603013"} 2023-07-21 15:16:43,016 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 15:16:43,017 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952603017"}]},"ts":"1689952603017"} 2023-07-21 15:16:43,020 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 15:16:43,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:43,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:43,024 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:43,024 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:43,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, ASSIGN}] 2023-07-21 15:16:43,027 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, ASSIGN 2023-07-21 15:16:43,027 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, ASSIGN 2023-07-21 15:16:43,027 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, ASSIGN 2023-07-21 15:16:43,027 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, ASSIGN 2023-07-21 15:16:43,027 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, ASSIGN 2023-07-21 15:16:43,028 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:43,028 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:43,029 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:43,029 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:43,028 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:43,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-21 15:16:43,179 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 15:16:43,182 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=3f9d36c14432eb6357ba72be58fa2480, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,182 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=4d061bc934b42acd61790a0e38a84cb5, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,182 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=948fc64543b80de4c1db174f2ba83ade, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,182 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=cdeccce8268f9e96444d8cb374b0c0cb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,182 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603182"}]},"ts":"1689952603182"} 2023-07-21 15:16:43,182 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=af02bfc4822d3fb75cd3910e9cc589ff, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,183 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603182"}]},"ts":"1689952603182"} 2023-07-21 15:16:43,183 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603182"}]},"ts":"1689952603182"} 2023-07-21 15:16:43,182 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603182"}]},"ts":"1689952603182"} 2023-07-21 15:16:43,182 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603182"}]},"ts":"1689952603182"} 2023-07-21 15:16:43,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=61, state=RUNNABLE; OpenRegionProcedure 
948fc64543b80de4c1db174f2ba83ade, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:43,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure cdeccce8268f9e96444d8cb374b0c0cb, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:43,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=60, state=RUNNABLE; OpenRegionProcedure af02bfc4822d3fb75cd3910e9cc589ff, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:43,198 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=63, state=RUNNABLE; OpenRegionProcedure 4d061bc934b42acd61790a0e38a84cb5, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:43,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; OpenRegionProcedure 3f9d36c14432eb6357ba72be58fa2480, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:43,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3f9d36c14432eb6357ba72be58fa2480, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 15:16:43,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 
2023-07-21 15:16:43,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdeccce8268f9e96444d8cb374b0c0cb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 15:16:43,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,343 INFO [StoreOpener-3f9d36c14432eb6357ba72be58fa2480-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,344 INFO [StoreOpener-cdeccce8268f9e96444d8cb374b0c0cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,345 DEBUG [StoreOpener-3f9d36c14432eb6357ba72be58fa2480-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/f 2023-07-21 15:16:43,345 DEBUG [StoreOpener-3f9d36c14432eb6357ba72be58fa2480-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/f 2023-07-21 15:16:43,345 DEBUG [StoreOpener-cdeccce8268f9e96444d8cb374b0c0cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/f 2023-07-21 15:16:43,345 DEBUG [StoreOpener-cdeccce8268f9e96444d8cb374b0c0cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/f 2023-07-21 15:16:43,345 INFO [StoreOpener-cdeccce8268f9e96444d8cb374b0c0cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdeccce8268f9e96444d8cb374b0c0cb columnFamilyName f 2023-07-21 15:16:43,345 INFO [StoreOpener-3f9d36c14432eb6357ba72be58fa2480-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3f9d36c14432eb6357ba72be58fa2480 columnFamilyName f 2023-07-21 15:16:43,346 INFO [StoreOpener-cdeccce8268f9e96444d8cb374b0c0cb-1] regionserver.HStore(310): Store=cdeccce8268f9e96444d8cb374b0c0cb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:43,346 INFO [StoreOpener-3f9d36c14432eb6357ba72be58fa2480-1] regionserver.HStore(310): Store=3f9d36c14432eb6357ba72be58fa2480/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:43,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:43,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:43,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:43,381 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened cdeccce8268f9e96444d8cb374b0c0cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9968986240, jitterRate=-0.07156580686569214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:43,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for cdeccce8268f9e96444d8cb374b0c0cb: 2023-07-21 15:16:43,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:43,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 3f9d36c14432eb6357ba72be58fa2480; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9659778880, jitterRate=-0.10036298632621765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:43,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 3f9d36c14432eb6357ba72be58fa2480: 2023-07-21 15:16:43,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb., pid=65, masterSystemTime=1689952603339 2023-07-21 15:16:43,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480., pid=68, masterSystemTime=1689952603337 2023-07-21 15:16:43,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:43,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:43,387 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 
2023-07-21 15:16:43,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4d061bc934b42acd61790a0e38a84cb5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 15:16:43,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,388 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=cdeccce8268f9e96444d8cb374b0c0cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,389 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603388"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952603388"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952603388"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952603388"}]},"ts":"1689952603388"} 2023-07-21 15:16:43,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:43,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 
2023-07-21 15:16:43,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 948fc64543b80de4c1db174f2ba83ade, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 15:16:43,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,397 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=3f9d36c14432eb6357ba72be58fa2480, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,398 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603397"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952603397"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952603397"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952603397"}]},"ts":"1689952603397"} 2023-07-21 15:16:43,400 INFO [StoreOpener-4d061bc934b42acd61790a0e38a84cb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,402 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-21 15:16:43,402 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure cdeccce8268f9e96444d8cb374b0c0cb, server=jenkins-hbase17.apache.org,41557,1689952596371 in 207 msec 2023-07-21 15:16:43,403 INFO [StoreOpener-948fc64543b80de4c1db174f2ba83ade-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,404 DEBUG [StoreOpener-4d061bc934b42acd61790a0e38a84cb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/f 2023-07-21 15:16:43,404 DEBUG [StoreOpener-4d061bc934b42acd61790a0e38a84cb5-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/f 2023-07-21 15:16:43,405 INFO [StoreOpener-4d061bc934b42acd61790a0e38a84cb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4d061bc934b42acd61790a0e38a84cb5 columnFamilyName f 2023-07-21 15:16:43,406 DEBUG [StoreOpener-948fc64543b80de4c1db174f2ba83ade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/f 2023-07-21 15:16:43,406 DEBUG [StoreOpener-948fc64543b80de4c1db174f2ba83ade-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/f 2023-07-21 15:16:43,406 INFO [StoreOpener-4d061bc934b42acd61790a0e38a84cb5-1] regionserver.HStore(310): Store=4d061bc934b42acd61790a0e38a84cb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:43,407 INFO [StoreOpener-948fc64543b80de4c1db174f2ba83ade-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 948fc64543b80de4c1db174f2ba83ade columnFamilyName f 2023-07-21 15:16:43,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,409 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=cdeccce8268f9e96444d8cb374b0c0cb, ASSIGN in 378 msec 2023-07-21 15:16:43,409 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-21 15:16:43,409 INFO [StoreOpener-948fc64543b80de4c1db174f2ba83ade-1] regionserver.HStore(310): Store=948fc64543b80de4c1db174f2ba83ade/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:43,409 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; OpenRegionProcedure 3f9d36c14432eb6357ba72be58fa2480, server=jenkins-hbase17.apache.org,37121,1689952592049 in 211 msec 2023-07-21 15:16:43,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:43,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:43,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, ASSIGN in 385 msec 2023-07-21 15:16:43,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-21 15:16:43,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:43,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:43,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 948fc64543b80de4c1db174f2ba83ade; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11145138880, jitterRate=0.037971943616867065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:43,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 948fc64543b80de4c1db174f2ba83ade: 2023-07-21 15:16:43,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 4d061bc934b42acd61790a0e38a84cb5; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10294111680, jitterRate=-0.04128614068031311}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:43,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 4d061bc934b42acd61790a0e38a84cb5: 2023-07-21 15:16:43,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5., pid=67, masterSystemTime=1689952603339 2023-07-21 15:16:43,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade., pid=64, masterSystemTime=1689952603337 2023-07-21 15:16:43,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:43,439 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=4d061bc934b42acd61790a0e38a84cb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:43,439 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603439"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952603439"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952603439"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952603439"}]},"ts":"1689952603439"} 2023-07-21 15:16:43,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:43,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 
2023-07-21 15:16:43,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => af02bfc4822d3fb75cd3910e9cc589ff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 15:16:43,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:43,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,441 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=948fc64543b80de4c1db174f2ba83ade, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,441 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603441"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952603441"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952603441"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952603441"}]},"ts":"1689952603441"} 2023-07-21 15:16:43,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=63 2023-07-21 15:16:43,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; OpenRegionProcedure 4d061bc934b42acd61790a0e38a84cb5, server=jenkins-hbase17.apache.org,41557,1689952596371 in 255 msec 2023-07-21 15:16:43,448 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=61 2023-07-21 15:16:43,448 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=61, state=SUCCESS; OpenRegionProcedure 948fc64543b80de4c1db174f2ba83ade, server=jenkins-hbase17.apache.org,37121,1689952592049 in 260 msec 2023-07-21 15:16:43,448 INFO [StoreOpener-af02bfc4822d3fb75cd3910e9cc589ff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,451 DEBUG [StoreOpener-af02bfc4822d3fb75cd3910e9cc589ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/f 2023-07-21 15:16:43,451 DEBUG [StoreOpener-af02bfc4822d3fb75cd3910e9cc589ff-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/f 2023-07-21 15:16:43,452 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, ASSIGN in 424 msec 2023-07-21 15:16:43,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, ASSIGN in 424 msec 2023-07-21 15:16:43,452 INFO [StoreOpener-af02bfc4822d3fb75cd3910e9cc589ff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region af02bfc4822d3fb75cd3910e9cc589ff columnFamilyName f 2023-07-21 15:16:43,454 INFO [StoreOpener-af02bfc4822d3fb75cd3910e9cc589ff-1] regionserver.HStore(310): Store=af02bfc4822d3fb75cd3910e9cc589ff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:43,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:43,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:43,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened af02bfc4822d3fb75cd3910e9cc589ff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11055751520, jitterRate=0.029647096991539}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:43,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for af02bfc4822d3fb75cd3910e9cc589ff: 2023-07-21 15:16:43,472 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff., pid=66, masterSystemTime=1689952603337 2023-07-21 15:16:43,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:43,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:43,477 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=af02bfc4822d3fb75cd3910e9cc589ff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,477 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603477"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952603477"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952603477"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952603477"}]},"ts":"1689952603477"} 2023-07-21 15:16:43,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=60 2023-07-21 15:16:43,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=60, state=SUCCESS; OpenRegionProcedure af02bfc4822d3fb75cd3910e9cc589ff, server=jenkins-hbase17.apache.org,37121,1689952592049 in 292 msec 2023-07-21 15:16:43,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=58 2023-07-21 15:16:43,499 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952603499"}]},"ts":"1689952603499"} 2023-07-21 15:16:43,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, ASSIGN in 472 msec 2023-07-21 15:16:43,501 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 15:16:43,503 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 15:16:43,506 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 695 msec 2023-07-21 15:16:43,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-21 15:16:43,925 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-21 15:16:43,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, 
group=Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:43,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:43,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:43,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:43,929 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:43,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:43,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:43,941 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952603941"}]},"ts":"1689952603941"} 2023-07-21 15:16:43,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 15:16:43,943 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 15:16:43,944 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 15:16:43,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, UNASSIGN}] 2023-07-21 15:16:43,950 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, UNASSIGN 2023-07-21 15:16:43,954 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, UNASSIGN 2023-07-21 15:16:43,954 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, UNASSIGN 2023-07-21 15:16:43,955 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, UNASSIGN 2023-07-21 15:16:43,955 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, UNASSIGN 2023-07-21 15:16:43,963 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=af02bfc4822d3fb75cd3910e9cc589ff, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=cdeccce8268f9e96444d8cb374b0c0cb, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,963 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=3f9d36c14432eb6357ba72be58fa2480, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,963 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603963"}]},"ts":"1689952603963"} 2023-07-21 15:16:43,963 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=4d061bc934b42acd61790a0e38a84cb5, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:43,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603963"}]},"ts":"1689952603963"} 2023-07-21 15:16:43,964 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603963"}]},"ts":"1689952603963"} 2023-07-21 15:16:43,964 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952603963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603963"}]},"ts":"1689952603963"} 2023-07-21 15:16:43,963 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=948fc64543b80de4c1db174f2ba83ade, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:43,964 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952603963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952603963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952603963"}]},"ts":"1689952603963"} 2023-07-21 15:16:43,969 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=71, state=RUNNABLE; CloseRegionProcedure af02bfc4822d3fb75cd3910e9cc589ff, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:43,970 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 3f9d36c14432eb6357ba72be58fa2480, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:43,971 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure 4d061bc934b42acd61790a0e38a84cb5, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:43,973 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=70, state=RUNNABLE; CloseRegionProcedure cdeccce8268f9e96444d8cb374b0c0cb, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:43,974 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=72, state=RUNNABLE; CloseRegionProcedure 948fc64543b80de4c1db174f2ba83ade, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:44,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 15:16:44,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:44,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 3f9d36c14432eb6357ba72be58fa2480, disabling compactions & flushes 2023-07-21 15:16:44,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:44,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:44,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 
after waiting 0 ms 2023-07-21 15:16:44,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:44,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:44,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 4d061bc934b42acd61790a0e38a84cb5, disabling compactions & flushes 2023-07-21 15:16:44,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:44,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:44,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. after waiting 0 ms 2023-07-21 15:16:44,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:44,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:44,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480. 2023-07-21 15:16:44,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 3f9d36c14432eb6357ba72be58fa2480: 2023-07-21 15:16:44,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:44,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:44,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:44,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing af02bfc4822d3fb75cd3910e9cc589ff, disabling compactions & flushes 2023-07-21 15:16:44,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:44,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 
2023-07-21 15:16:44,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. after waiting 0 ms 2023-07-21 15:16:44,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 2023-07-21 15:16:44,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5. 2023-07-21 15:16:44,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 4d061bc934b42acd61790a0e38a84cb5: 2023-07-21 15:16:44,138 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=3f9d36c14432eb6357ba72be58fa2480, regionState=CLOSED 2023-07-21 15:16:44,139 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952604138"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952604138"}]},"ts":"1689952604138"} 2023-07-21 15:16:44,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:44,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:44,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing cdeccce8268f9e96444d8cb374b0c0cb, disabling compactions & flushes 2023-07-21 15:16:44,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:44,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 2023-07-21 15:16:44,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. after waiting 0 ms 2023-07-21 15:16:44,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 
2023-07-21 15:16:44,143 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=4d061bc934b42acd61790a0e38a84cb5, regionState=CLOSED 2023-07-21 15:16:44,143 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952604143"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952604143"}]},"ts":"1689952604143"} 2023-07-21 15:16:44,146 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-21 15:16:44,147 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 3f9d36c14432eb6357ba72be58fa2480, server=jenkins-hbase17.apache.org,37121,1689952592049 in 174 msec 2023-07-21 15:16:44,150 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f9d36c14432eb6357ba72be58fa2480, UNASSIGN in 202 msec 2023-07-21 15:16:44,151 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-21 15:16:44,151 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure 4d061bc934b42acd61790a0e38a84cb5, server=jenkins-hbase17.apache.org,41557,1689952596371 in 176 msec 2023-07-21 15:16:44,156 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d061bc934b42acd61790a0e38a84cb5, UNASSIGN in 206 msec 2023-07-21 15:16:44,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:44,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff. 
2023-07-21 15:16:44,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for af02bfc4822d3fb75cd3910e9cc589ff: 2023-07-21 15:16:44,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:44,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:44,175 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=af02bfc4822d3fb75cd3910e9cc589ff, regionState=CLOSED 2023-07-21 15:16:44,175 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952604175"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952604175"}]},"ts":"1689952604175"} 2023-07-21 15:16:44,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 948fc64543b80de4c1db174f2ba83ade, disabling compactions & flushes 2023-07-21 15:16:44,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:44,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:44,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. after waiting 0 ms 2023-07-21 15:16:44,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:44,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:44,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb. 
2023-07-21 15:16:44,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for cdeccce8268f9e96444d8cb374b0c0cb: 2023-07-21 15:16:44,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:44,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=71 2023-07-21 15:16:44,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=71, state=SUCCESS; CloseRegionProcedure af02bfc4822d3fb75cd3910e9cc589ff, server=jenkins-hbase17.apache.org,37121,1689952592049 in 214 msec 2023-07-21 15:16:44,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:44,194 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade. 2023-07-21 15:16:44,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 948fc64543b80de4c1db174f2ba83ade: 2023-07-21 15:16:44,194 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=cdeccce8268f9e96444d8cb374b0c0cb, regionState=CLOSED 2023-07-21 15:16:44,194 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689952604194"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952604194"}]},"ts":"1689952604194"} 2023-07-21 15:16:44,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:44,197 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=af02bfc4822d3fb75cd3910e9cc589ff, UNASSIGN in 247 msec 2023-07-21 15:16:44,197 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=948fc64543b80de4c1db174f2ba83ade, regionState=CLOSED 2023-07-21 15:16:44,198 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689952604197"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952604197"}]},"ts":"1689952604197"} 2023-07-21 15:16:44,199 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=70 2023-07-21 15:16:44,200 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=70, state=SUCCESS; CloseRegionProcedure cdeccce8268f9e96444d8cb374b0c0cb, server=jenkins-hbase17.apache.org,41557,1689952596371 in 223 msec 2023-07-21 15:16:44,206 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdeccce8268f9e96444d8cb374b0c0cb, UNASSIGN in 255 msec 2023-07-21 
15:16:44,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=72 2023-07-21 15:16:44,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=72, state=SUCCESS; CloseRegionProcedure 948fc64543b80de4c1db174f2ba83ade, server=jenkins-hbase17.apache.org,37121,1689952592049 in 225 msec 2023-07-21 15:16:44,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-21 15:16:44,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=948fc64543b80de4c1db174f2ba83ade, UNASSIGN in 261 msec 2023-07-21 15:16:44,212 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952604212"}]},"ts":"1689952604212"} 2023-07-21 15:16:44,214 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 15:16:44,215 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 15:16:44,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 287 msec 2023-07-21 15:16:44,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-21 15:16:44,245 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-21 15:16:44,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,259 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1456384549' 2023-07-21 15:16:44,261 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:44,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,263 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:44,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-21 15:16:44,274 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:44,274 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:44,274 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:44,274 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:44,274 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:44,276 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/recovered.edits] 2023-07-21 15:16:44,276 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/recovered.edits] 2023-07-21 15:16:44,276 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/recovered.edits] 2023-07-21 15:16:44,276 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/f, FileablePath, 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/recovered.edits] 2023-07-21 15:16:44,276 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/recovered.edits] 2023-07-21 15:16:44,283 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480/recovered.edits/4.seqid 2023-07-21 15:16:44,283 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb/recovered.edits/4.seqid 2023-07-21 15:16:44,283 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5/recovered.edits/4.seqid 2023-07-21 15:16:44,284 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f9d36c14432eb6357ba72be58fa2480 2023-07-21 15:16:44,284 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdeccce8268f9e96444d8cb374b0c0cb 2023-07-21 15:16:44,285 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d061bc934b42acd61790a0e38a84cb5 2023-07-21 15:16:44,285 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade/recovered.edits/4.seqid 2023-07-21 15:16:44,286 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff/recovered.edits/4.seqid 2023-07-21 15:16:44,286 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/948fc64543b80de4c1db174f2ba83ade 2023-07-21 15:16:44,286 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testTableMoveTruncateAndDrop/af02bfc4822d3fb75cd3910e9cc589ff 2023-07-21 15:16:44,286 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 15:16:44,289 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,296 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 15:16:44,299 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 15:16:44,301 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952604301"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952604301"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952604301"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952604301"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,301 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952604301"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,304 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 15:16:44,304 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cdeccce8268f9e96444d8cb374b0c0cb, NAME => 'Group_testTableMoveTruncateAndDrop,,1689952602853.cdeccce8268f9e96444d8cb374b0c0cb.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => af02bfc4822d3fb75cd3910e9cc589ff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689952602853.af02bfc4822d3fb75cd3910e9cc589ff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 948fc64543b80de4c1db174f2ba83ade, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689952602853.948fc64543b80de4c1db174f2ba83ade.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3f9d36c14432eb6357ba72be58fa2480, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689952602853.3f9d36c14432eb6357ba72be58fa2480.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 4d061bc934b42acd61790a0e38a84cb5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689952602854.4d061bc934b42acd61790a0e38a84cb5.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 15:16:44,304 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 15:16:44,304 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952604304"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:44,306 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 15:16:44,308 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 15:16:44,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 56 msec 2023-07-21 15:16:44,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-21 15:16:44,372 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-21 15:16:44,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:44,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,376 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37121] ipc.CallRunner(144): callId: 166 service: ClientService methodName: Scan size: 147 connection: 136.243.18.41:60278 deadline: 1689952664376, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=6. 2023-07-21 15:16:44,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:16:44,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:16:44,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:44,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:44,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1456384549, current retry=0 2023-07-21 15:16:44,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:44,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1456384549 => default 2023-07-21 15:16:44,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testTableMoveTruncateAndDrop_1456384549 2023-07-21 15:16:44,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:44,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:16:44,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:44,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:44,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:44,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,556 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:44,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:44,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:44,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:44,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953804574, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:44,575 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:44,578 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:44,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,580 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:44,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:44,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,614 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 421) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1088696292_17 at /127.0.0.1:55516 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1954092247-634-acceptor-0@742dfcd1-ServerConnector@4bb7d136{HTTP/1.1, (http/1.1)}{0.0.0.0:45273} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58-prefix:jenkins-hbase17.apache.org,41557,1689952596371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1954092247-633 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1088696292_17 at /127.0.0.1:42122 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1954092247-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64886@0x5562d093-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1954092247-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1886360740_17 at /127.0.0.1:55560 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1954092247-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:41557Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1954092247-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1954092247-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1954092247-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2ced9074-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1886360740_17 at /127.0.0.1:36920 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58-prefix:jenkins-hbase17.apache.org,46091,1689952592464.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client 
(1372221911) connection to localhost.localdomain/127.0.0.1:41491 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1088696292_17 at /127.0.0.1:36876 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1088696292_17 at /127.0.0.1:55362 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-330635600_17 at /127.0.0.1:58944 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:41491 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64886@0x5562d093-SendThread(127.0.0.1:64886) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64886@0x5562d093 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:41557 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) 
java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:41557-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: DataXceiver for client DFSClient_NONMAPREDUCE_1886360740_17 at /127.0.0.1:42154 [Receiving block BP-710562131-136.243.18.41-1689952586084:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-710562131-136.243.18.41-1689952586084:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=666 (was 636) - SystemLoadAverage LEAK? -, ProcessCount=189 (was 186) - ProcessCount LEAK? -, AvailableMemoryMB=1612 (was 1790) 2023-07-21 15:16:44,615 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-21 15:16:44,633 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=666, ProcessCount=189, AvailableMemoryMB=1611 2023-07-21 15:16:44,634 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-21 15:16:44,634 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 15:16:44,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
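Each ResourceChecker "before:"/"after:" pair above snapshots the thread count, open file descriptors, system load average, process count and available memory around a single test method, and warns when the thread count exceeds 500 ("Thread=504 is superior to 500"). The following is a minimal sketch of that before/after idea using only standard JMX beans; the 500 threshold and the metric names mirror the log lines, but the class and method names here are illustrative and are not HBase's ResourceChecker implementation.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

final class SimpleResourceCheck {
  private static final int THREAD_WARN_THRESHOLD = 500; // matches the warning threshold seen in the log

  private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
  private final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

  int snapshotThreads() {
    // live JVM thread count, the "Thread=..." figure in the before/after lines
    return threads.getThreadCount();
  }

  void report(String phase, String testName, int threadCount) {
    System.out.printf("%s: %s Thread=%d, SystemLoadAverage=%.0f, AvailableMemoryMB=%d%n",
        phase, testName, threadCount, os.getSystemLoadAverage(),
        Runtime.getRuntime().freeMemory() / (1024 * 1024));
    if (threadCount > THREAD_WARN_THRESHOLD) {
      System.out.printf("WARN: Thread=%d is superior to %d%n", threadCount, THREAD_WARN_THRESHOLD);
    }
  }

  public static void main(String[] args) {
    SimpleResourceCheck check = new SimpleResourceCheck();
    int before = check.snapshotThreads();
    check.report("before", "rsgroup.TestRSGroupsAdmin1#testValidGroupNames", before);
    // ... run the test body here ...
    int after = check.snapshotThreads();
    check.report("after", "rsgroup.TestRSGroupsAdmin1#testValidGroupNames", after);
  }
}

Comparing the before and after figures is what lets the harness flag "Thread LEAK?" style regressions per test, as in the summary lines above.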
2023-07-21 15:16:44,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:44,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:44,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:44,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,669 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:44,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:44,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:44,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953804697, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:44,698 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:44,700 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:44,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,701 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:44,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:44,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo* 2023-07-21 15:16:44,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 136.243.18.41:53818 deadline: 1689953804706, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 15:16:44,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo@ 2023-07-21 15:16:44,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 136.243.18.41:53818 deadline: 1689953804708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 15:16:44,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup - 2023-07-21 15:16:44,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 136.243.18.41:53818 deadline: 1689953804710, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 15:16:44,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup foo_123 2023-07-21 15:16:44,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 15:16:44,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:44,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
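The testValidGroupNames exchange above shows addRSGroup rejecting "foo*", "foo@" and "-" with "RSGroup name should only contain alphanumeric characters" while accepting "foo_123", i.e. underscores pass despite the wording of the message. Below is a minimal validator consistent with that observed behavior; the regex is an inference from the log, not code quoted from RSGroupInfoManagerImpl.checkGroupName.

import org.apache.hadoop.hbase.constraint.ConstraintException;

final class GroupNameCheck {
  // Inferred from the observed behavior above: letters, digits and underscore are accepted,
  // anything else (e.g. "foo*", "foo@", "-") is rejected.
  private static final String VALID = "[a-zA-Z0-9_]+";

  static void checkGroupName(String name) throws ConstraintException {
    if (name == null || !name.matches(VALID)) {
      throw new ConstraintException("RSGroup name should only contain alphanumeric characters");
    }
  }

  public static void main(String[] args) throws Exception {
    checkGroupName("foo_123"); // accepted, as in the log (ZK GroupInfo count goes to 6)
    checkGroupName("foo*");    // throws ConstraintException, as in callId 183 above
  }
}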
2023-07-21 15:16:44,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:44,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup foo_123 2023-07-21 15:16:44,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:44,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
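The setup and teardown sequences in this log (above, and again below) repeatedly try to move the master's RPC endpoint jenkins-hbase17.apache.org:33893 into the "master" rsgroup and are rejected with a ConstraintException, because only live region servers can be moved between groups. A minimal client-side sketch of issuing and handling that call follows; the RSGroupAdminClient constructor and the Address helper are assumed from typical HBase 2.x rsgroup client usage rather than taken from this log, so treat the exact signatures as an assumption.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Assumed constructor: a coprocessor-backed admin client over an existing Connection.
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);
      try {
        admin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33893)),
            "master");
      } catch (ConstraintException e) {
        // The master's address is not a live region server, so the move is rejected,
        // matching "is either offline or it does not exist" in the log.
        System.out.println("Move rejected: " + e.getMessage());
      }
    }
  }
}

The test base class treats this failure as expected noise during setup/teardown, which is why it is only logged as "Got this on setup, FYI" rather than failing the test.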
2023-07-21 15:16:44,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:44,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:44,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:44,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,786 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:44,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:44,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:44,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:44,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953804803, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:44,804 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:44,805 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:44,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,807 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:44,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:44,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,833 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x75c83904-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=813 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=666 (was 666), ProcessCount=186 (was 189), AvailableMemoryMB=1562 (was 1611) 2023-07-21 15:16:44,833 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-21 15:16:44,855 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=666, ProcessCount=186, AvailableMemoryMB=1561 2023-07-21 15:16:44,855 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-21 15:16:44,856 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 15:16:44,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:44,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
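The per-method cleanup that produces the surrounding entries (move tables [] to rsgroup default, move servers [] to rsgroup default, remove/re-add rsgroup master) boils down to a handful of RSGroupAdminClient calls, the same client API that appears in the stack traces above. A minimal sketch, assuming an already-open Connection named conn and the hypothetical helper name resetRSGroups:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hypothetical helper mirroring the logged cleanup: push every non-default
    // group's tables and servers back to "default", then drop the group.
    static void resetRSGroups(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
          continue;
        }
        // Empty sets are tolerated; see "moveTables() passed an empty set. Ignoring." above.
        rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.removeRSGroup(group.getName());
      }
    }

The cleanup additionally re-creates a "master" group and tries to move the master's own address (port 33893) into it; that attempt is what raises the ConstraintException logged as "Got this on setup, FYI" below, and the test deliberately tolerates it.
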
2023-07-21 15:16:44,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:44,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:44,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:44,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:44,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:44,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:44,875 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:44,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:44,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:44,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:44,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:44,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953804888, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:44,889 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:44,891 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:44,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,893 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:44,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:44,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:44,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:44,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//136.243.18.41 add rsgroup bar 2023-07-21 15:16:44,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:44,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:44,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:44,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:44,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:44,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup bar 2023-07-21 15:16:44,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:44,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:44,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:44,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:44,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(238): Moving server region a1be046ee9a2834d581cd55948dca519, which do not belong to RSGroup bar 2023-07-21 15:16:44,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE 2023-07-21 15:16:44,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:16:44,918 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE 2023-07-21 15:16:44,920 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:44,920 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952604920"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952604920"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952604920"}]},"ts":"1689952604920"} 2023-07-21 15:16:44,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:45,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a1be046ee9a2834d581cd55948dca519, disabling compactions & flushes 2023-07-21 15:16:45,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. after waiting 0 ms 2023-07-21 15:16:45,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing a1be046ee9a2834d581cd55948dca519 1/1 column families, dataSize=5.06 KB heapSize=8.50 KB 2023-07-21 15:16:45,516 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.06 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/854e6abf381043ac8b96994fca0eb5e8 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/854e6abf381043ac8b96994fca0eb5e8, 
entries=9, sequenceid=32, filesize=5.5 K 2023-07-21 15:16:45,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.06 KB/5177, heapSize ~8.48 KB/8688, currentSize=0 B/0 for a1be046ee9a2834d581cd55948dca519 in 460ms, sequenceid=32, compaction requested=false 2023-07-21 15:16:45,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-21 15:16:45,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:45,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:45,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding a1be046ee9a2834d581cd55948dca519 move to jenkins-hbase17.apache.org,46091,1689952592464 record at close sequenceid=32 2023-07-21 15:16:45,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,559 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=CLOSED 2023-07-21 15:16:45,559 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952605559"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952605559"}]},"ts":"1689952605559"} 2023-07-21 15:16:45,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 15:16:45,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,43323,1689952592244 in 639 msec 2023-07-21 15:16:45,564 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:45,714 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:45,715 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952605714"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952605714"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952605714"}]},"ts":"1689952605714"} 2023-07-21 15:16:45,716 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:45,817 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:45,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a1be046ee9a2834d581cd55948dca519, NAME => 'hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:45,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:45,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. service=MultiRowMutationService 2023-07-21 15:16:45,875 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 15:16:45,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:45,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,880 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,885 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:45,885 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m 2023-07-21 15:16:45,886 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a1be046ee9a2834d581cd55948dca519 columnFamilyName m 2023-07-21 15:16:45,904 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/852fd3871d9d424fa62a04581adf7953 2023-07-21 15:16:45,915 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,915 DEBUG [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/854e6abf381043ac8b96994fca0eb5e8 2023-07-21 15:16:45,915 INFO [StoreOpener-a1be046ee9a2834d581cd55948dca519-1] regionserver.HStore(310): Store=a1be046ee9a2834d581cd55948dca519/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:45,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,918 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-21 15:16:45,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:16:45,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a1be046ee9a2834d581cd55948dca519; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@597b57a1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:45,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:16:45,924 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519., pid=83, masterSystemTime=1689952605869 2023-07-21 15:16:45,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,926 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:16:45,926 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a1be046ee9a2834d581cd55948dca519, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:45,927 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952605926"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952605926"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952605926"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952605926"}]},"ts":"1689952605926"} 2023-07-21 15:16:45,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-21 15:16:45,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure a1be046ee9a2834d581cd55948dca519, server=jenkins-hbase17.apache.org,46091,1689952592464 in 212 msec 2023-07-21 15:16:45,932 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a1be046ee9a2834d581cd55948dca519, REOPEN/MOVE in 1.0140 sec 2023-07-21 15:16:46,714 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:16:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371, jenkins-hbase17.apache.org,43323,1689952592244] are moved back to default 2023-07-21 15:16:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 15:16:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:46,921 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43323] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:34994 deadline: 1689952666921, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=32. 
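The segment just above, where group bar is created and three of the four region servers are moved into it (first forcing the hbase:rsgroup region they hosted back onto the remaining default-group server), corresponds roughly to the following client-side calls. The host and port literals are copied from the log; conn is the same assumed open Connection:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void addBarGroup(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("bar");
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase17.apache.org", 43323));
      servers.add(Address.fromParts("jenkins-hbase17.apache.org", 37121));
      servers.add(Address.fromParts("jenkins-hbase17.apache.org", 41557));
      // Regions on these servers that do not belong to "bar" (here the hbase:rsgroup
      // region a1be046e...) are moved back to the default group before the servers
      // change groups; that is the REOPEN/MOVE procedure (pid=81) logged above.
      rsGroupAdmin.moveServers(servers, "bar");
    }
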
2023-07-21 15:16:47,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:47,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:47,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bar 2023-07-21 15:16:47,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:47,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:47,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:47,047 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:47,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-21 15:16:47,048 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43323] ipc.CallRunner(144): callId: 198 service: ClientService methodName: ExecService size: 532 connection: 136.243.18.41:34998 deadline: 1689952667048, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=46091 startCode=1689952592464. As of locationSeqNum=32. 
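The create-table request logged above can be issued from a client roughly as follows. This is only a sketch of the standard Admin/TableDescriptorBuilder API, with the one family attribute that visibly differs from the 2.4 defaults (BLOOMFILTER 'NONE') set explicitly; conn is again an assumed open Connection:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    static void createGroupTestTable(Connection conn) throws IOException {
      try (Admin admin = conn.getAdmin()) {
        // One region, one column family 'f'; the remaining attributes are left at the
        // defaults that the HMaster log line above prints in expanded form.
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("f"))
                .setBloomFilterType(BloomType.NONE)
                .build())
            .build());
      }
    }
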
2023-07-21 15:16:47,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 15:16:47,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 15:16:47,152 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:47,153 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:47,153 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:47,153 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:47,175 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:47,177 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,178 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead empty. 2023-07-21 15:16:47,179 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,179 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 15:16:47,197 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:47,199 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b9a1f08595b83f4d8e0067ac1391dead, NAME => 'Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing b9a1f08595b83f4d8e0067ac1391dead, disabling compactions & flushes 2023-07-21 15:16:47,211 INFO 
[RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. after waiting 0 ms 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,211 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,211 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:47,214 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:47,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952607214"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952607214"}]},"ts":"1689952607214"} 2023-07-21 15:16:47,216 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 15:16:47,217 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:47,217 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952607217"}]},"ts":"1689952607217"} 2023-07-21 15:16:47,218 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 15:16:47,221 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, ASSIGN}] 2023-07-21 15:16:47,222 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, ASSIGN 2023-07-21 15:16:47,223 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:47,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 15:16:47,375 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:47,375 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952607375"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952607375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952607375"}]},"ts":"1689952607375"} 2023-07-21 15:16:47,377 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:47,534 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
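At this point the new table still belongs to the default rsgroup, so its region is assigned to jenkins-hbase17.apache.org,46091, the only region server left in that group; the region open on that server starts in the entry just above and completes below ("Opened Group_testFailRemoveGroup..."). Purely as an illustration, the placement could be confirmed from a client with a RegionLocator, roughly (conn assumed):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    static void printRegionLocations(Connection conn) throws IOException {
      try (RegionLocator locator =
          conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          // Expected here: b9a1f08595b83f4d8e0067ac1391dead hosted on ...,46091,...
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }
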
2023-07-21 15:16:47,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9a1f08595b83f4d8e0067ac1391dead, NAME => 'Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:47,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:47,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,537 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,540 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:47,540 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:47,540 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9a1f08595b83f4d8e0067ac1391dead columnFamilyName f 2023-07-21 15:16:47,541 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(310): Store=b9a1f08595b83f4d8e0067ac1391dead/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:47,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,544 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:47,551 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b9a1f08595b83f4d8e0067ac1391dead; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11089529760, jitterRate=0.032792940735816956}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:47,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:47,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead., pid=86, masterSystemTime=1689952607528 2023-07-21 15:16:47,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
2023-07-21 15:16:47,560 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:47,560 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952607560"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952607560"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952607560"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952607560"}]},"ts":"1689952607560"} 2023-07-21 15:16:47,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-21 15:16:47,566 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464 in 185 msec 2023-07-21 15:16:47,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-21 15:16:47,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, ASSIGN in 345 msec 2023-07-21 15:16:47,576 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:47,577 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952607577"}]},"ts":"1689952607577"} 2023-07-21 15:16:47,585 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 15:16:47,587 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:47,590 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 543 msec 2023-07-21 15:16:47,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 15:16:47,652 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-21 15:16:47,652 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-21 15:16:47,652 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:47,659 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
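Once the create completes and all regions are reported assigned (the Waiter/assignment checks logged around this point), the test moves the table itself into group bar; that moveTables call is what drives the second REOPEN/MOVE (pid=87) logged after it. A sketch, reusing the assumed conn:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void moveTableToBar(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moves region b9a1f08595b83f4d8e0067ac1391dead off the default-group server
      // (...,46091) and onto one of the three servers now in group "bar".
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
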
2023-07-21 15:16:47,660 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:47,660 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 15:16:47,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 15:16:47,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:47,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:47,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:47,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:47,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 15:16:47,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region b9a1f08595b83f4d8e0067ac1391dead to RSGroup bar 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:16:47,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:47,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE 2023-07-21 15:16:47,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 15:16:47,670 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE 2023-07-21 15:16:47,672 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:47,673 DEBUG [PEWorker-5] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952607672"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952607672"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952607672"}]},"ts":"1689952607672"} 2023-07-21 15:16:47,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:47,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b9a1f08595b83f4d8e0067ac1391dead, disabling compactions & flushes 2023-07-21 15:16:47,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. after waiting 0 ms 2023-07-21 15:16:47,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:47,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:47,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
2023-07-21 15:16:47,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:47,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding b9a1f08595b83f4d8e0067ac1391dead move to jenkins-hbase17.apache.org,43323,1689952592244 record at close sequenceid=2 2023-07-21 15:16:47,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:47,844 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSED 2023-07-21 15:16:47,844 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952607844"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952607844"}]},"ts":"1689952607844"} 2023-07-21 15:16:47,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 15:16:47,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464 in 171 msec 2023-07-21 15:16:47,848 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:48,004 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:48,005 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:48,005 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952608005"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952608005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952608005"}]},"ts":"1689952608005"} 2023-07-21 15:16:48,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:48,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
2023-07-21 15:16:48,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9a1f08595b83f4d8e0067ac1391dead, NAME => 'Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:48,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:48,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,169 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,170 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:48,170 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:48,171 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9a1f08595b83f4d8e0067ac1391dead columnFamilyName f 2023-07-21 15:16:48,171 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(310): Store=b9a1f08595b83f4d8e0067ac1391dead/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:48,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,173 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b9a1f08595b83f4d8e0067ac1391dead; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12014254080, jitterRate=0.11891460418701172}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:48,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:48,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead., pid=89, masterSystemTime=1689952608163 2023-07-21 15:16:48,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:48,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:48,182 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:48,183 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952608182"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952608182"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952608182"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952608182"}]},"ts":"1689952608182"} 2023-07-21 15:16:48,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-21 15:16:48,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,43323,1689952592244 in 177 msec 2023-07-21 15:16:48,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE in 519 msec 2023-07-21 15:16:48,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-21 15:16:48,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
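The span above, from "move tables [Group_testFailRemoveGroup] to rsgroup bar" through the REOPEN/MOVE of b9a1f08595b83f4d8e0067ac1391dead (pid=87..89) to "All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar", is the server-side trace of a single moveTables call. A sketch of that call using the RSGroupAdminClient that appears in this log's stack traces; the Connection argument and the verification step are assumptions:

```java
// Sketch of the client call behind "move tables [Group_testFailRemoveGroup] to rsgroup bar".
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  static void moveTableToBar(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
    // Issues the RSGroupAdminService.MoveTables request; the master responds with the
    // TransitRegionStateProcedure REOPEN/MOVE chain logged above.
    rsGroupAdmin.moveTables(Collections.singleton(tableName), "bar");
    // Verify the group's table list, as the GetRSGroupInfo request in the log does.
    RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
    assert bar.getTables().contains(tableName);
  }
}
```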
2023-07-21 15:16:48,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:48,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:48,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:48,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bar 2023-07-21 15:16:48,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:48,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 15:16:48,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:48,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 136.243.18.41:53818 deadline: 1689953808678, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 15:16:48,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:16:48,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:48,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 191 connection: 136.243.18.41:53818 deadline: 1689953808679, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 15:16:48,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 15:16:48,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:48,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:48,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:48,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:48,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 15:16:48,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region b9a1f08595b83f4d8e0067ac1391dead to RSGroup default 2023-07-21 15:16:48,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE 2023-07-21 15:16:48,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:16:48,689 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE 2023-07-21 15:16:48,690 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:48,690 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952608690"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952608690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952608690"}]},"ts":"1689952608690"} 2023-07-21 15:16:48,692 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:48,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b9a1f08595b83f4d8e0067ac1391dead, disabling compactions & flushes 2023-07-21 15:16:48,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:48,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:48,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. after waiting 0 ms 2023-07-21 15:16:48,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:48,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:48,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
2023-07-21 15:16:48,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:48,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding b9a1f08595b83f4d8e0067ac1391dead move to jenkins-hbase17.apache.org,46091,1689952592464 record at close sequenceid=5 2023-07-21 15:16:48,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:48,857 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSED 2023-07-21 15:16:48,858 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952608857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952608857"}]},"ts":"1689952608857"} 2023-07-21 15:16:48,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-21 15:16:48,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,43323,1689952592244 in 167 msec 2023-07-21 15:16:48,862 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:49,013 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:49,013 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952609012"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952609012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952609012"}]},"ts":"1689952609012"} 2023-07-21 15:16:49,015 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:49,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 
2023-07-21 15:16:49,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9a1f08595b83f4d8e0067ac1391dead, NAME => 'Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,176 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,178 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:49,178 DEBUG [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f 2023-07-21 15:16:49,179 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9a1f08595b83f4d8e0067ac1391dead columnFamilyName f 2023-07-21 15:16:49,179 INFO [StoreOpener-b9a1f08595b83f4d8e0067ac1391dead-1] regionserver.HStore(310): Store=b9a1f08595b83f4d8e0067ac1391dead/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:49,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,182 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b9a1f08595b83f4d8e0067ac1391dead; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10023982560, jitterRate=-0.06644387543201447}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:49,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:49,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead., pid=92, masterSystemTime=1689952609167 2023-07-21 15:16:49,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,189 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:49,189 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952609189"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952609189"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952609189"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952609189"}]},"ts":"1689952609189"} 2023-07-21 15:16:49,209 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-21 15:16:49,210 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464 in 176 msec 2023-07-21 15:16:49,212 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, REOPEN/MOVE in 522 msec 2023-07-21 15:16:49,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-21 15:16:49,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-21 15:16:49,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:49,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:49,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:49,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 15:16:49,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:49,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 136.243.18.41:53818 deadline: 1689953809696, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
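The three ConstraintExceptions logged so far are the behavior under test in testFailRemoveGroup: a group cannot be removed while it still contains tables, all of its servers cannot be moved away while those tables are assigned to it, and, once the table has been moved back to default (the REOPEN/MOVE pid=90..92 above), the group still cannot be removed while it contains servers. A hedged sketch of those expected-failure checks; rsGroupAdmin, barServers and tableName are illustrative names, not taken from the test source:

```java
// Sketch of the expected-failure sequence matching the ConstraintExceptions logged above.
import java.util.Collections;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupConstraintsSketch {
  static void expectConstraintFailures(RSGroupAdminClient rsGroupAdmin,
      Set<Address> barServers, TableName tableName) throws Exception {
    try {
      rsGroupAdmin.removeRSGroup("bar");               // "RSGroup bar has 1 tables; ..."
    } catch (ConstraintException expected) { /* group still owns the table */ }
    try {
      rsGroupAdmin.moveServers(barServers, "default"); // "Cannot leave a RSGroup bar that contains tables ..."
    } catch (ConstraintException expected) { /* would leave the table without servers */ }
    // The table is then moved back to default, as traced above ("moved to target group default").
    rsGroupAdmin.moveTables(Collections.singleton(tableName), "default");
    try {
      rsGroupAdmin.removeRSGroup("bar");               // "RSGroup bar has 3 servers; ..."
    } catch (ConstraintException expected) { /* group still owns servers */ }
  }
}
```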
2023-07-21 15:16:49,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:16:49,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:49,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 15:16:49,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:49,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:49,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 15:16:49,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371, jenkins-hbase17.apache.org,43323,1689952592244] are moved back to bar 2023-07-21 15:16:49,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 15:16:49,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:49,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:49,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:49,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bar 2023-07-21 15:16:49,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:49,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:49,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:49,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:49,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:49,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:49,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:49,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:49,729 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 15:16:49,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testFailRemoveGroup 2023-07-21 15:16:49,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:49,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 15:16:49,734 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952609734"}]},"ts":"1689952609734"} 2023-07-21 15:16:49,735 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 15:16:49,736 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 15:16:49,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, UNASSIGN}] 2023-07-21 15:16:49,739 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, UNASSIGN 2023-07-21 15:16:49,739 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:49,740 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952609739"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952609739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952609739"}]},"ts":"1689952609739"} 2023-07-21 15:16:49,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:49,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 15:16:49,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,894 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b9a1f08595b83f4d8e0067ac1391dead, disabling compactions & flushes 2023-07-21 15:16:49,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. after waiting 0 ms 2023-07-21 15:16:49,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 15:16:49,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead. 2023-07-21 15:16:49,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b9a1f08595b83f4d8e0067ac1391dead: 2023-07-21 15:16:49,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:49,904 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=b9a1f08595b83f4d8e0067ac1391dead, regionState=CLOSED 2023-07-21 15:16:49,904 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952609904"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952609904"}]},"ts":"1689952609904"} 2023-07-21 15:16:49,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 15:16:49,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure b9a1f08595b83f4d8e0067ac1391dead, server=jenkins-hbase17.apache.org,46091,1689952592464 in 165 msec 2023-07-21 15:16:49,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-21 15:16:49,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b9a1f08595b83f4d8e0067ac1391dead, UNASSIGN in 171 msec 2023-07-21 15:16:49,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952609909"}]},"ts":"1689952609909"} 2023-07-21 15:16:49,911 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 
15:16:49,912 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 15:16:49,914 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 183 msec 2023-07-21 15:16:50,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 15:16:50,036 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-21 15:16:50,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testFailRemoveGroup 2023-07-21 15:16:50,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,041 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 15:16:50,043 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,047 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:50,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:50,049 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits] 2023-07-21 15:16:50,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-21 15:16:50,055 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/10.seqid to 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead/recovered.edits/10.seqid 2023-07-21 15:16:50,055 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testFailRemoveGroup/b9a1f08595b83f4d8e0067ac1391dead 2023-07-21 15:16:50,055 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 15:16:50,057 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,059 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 15:16:50,061 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 15:16:50,062 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,062 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-21 15:16:50,062 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952610062"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:50,064 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:16:50,064 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b9a1f08595b83f4d8e0067ac1391dead, NAME => 'Group_testFailRemoveGroup,,1689952607044.b9a1f08595b83f4d8e0067ac1391dead.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:16:50,064 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-21 15:16:50,064 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952610064"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:50,066 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 15:16:50,067 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 15:16:50,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 30 msec 2023-07-21 15:16:50,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-21 15:16:50,152 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-21 15:16:50,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:50,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
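The DisableTableProcedure (pid=93) and DeleteTableProcedure (pid=96) above, together with the earlier successful moveServers back to default and "remove rsgroup bar" (the ZK GroupInfo count dropping from 6 to 5), make up the cleanup half of the run. A sketch of the equivalent client calls; the admin and rsGroupAdmin handles are assumptions:

```java
// Sketch of the cleanup corresponding to the successful "remove rsgroup bar" earlier
// and to pid=93 (DISABLE) / pid=96 (DELETE) above.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class CleanupSketch {
  static void cleanup(Admin admin, RSGroupAdminClient rsGroupAdmin) throws Exception {
    // With no tables and no servers left in "bar", removal now succeeds.
    rsGroupAdmin.removeRSGroup("bar");
    TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
    admin.disableTable(tableName);   // DisableTableProcedure, pid=93
    admin.deleteTable(tableName);    // DeleteTableProcedure, pid=96
  }
}
```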
2023-07-21 15:16:50,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:50,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:50,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:50,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:50,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:50,181 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:50,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:50,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:50,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:50,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:50,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:50,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953810193, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:50,194 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:50,195 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:50,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,196 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:50,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:50,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:50,215 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512 (was 507) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-348314727_17 at /127.0.0.1:33150 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1886360740_17 at /127.0.0.1:46056 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a0ffa86-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=801 (was 813), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=660 (was 666), ProcessCount=186 (was 186), AvailableMemoryMB=1380 (was 1561) 2023-07-21 15:16:50,215 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:16:50,233 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=660, ProcessCount=186, AvailableMemoryMB=1380 2023-07-21 15:16:50,233 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:16:50,236 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 15:16:50,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:50,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:16:50,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:50,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:50,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:50,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:50,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:50,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:50,251 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:50,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:50,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-21 15:16:50,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:50,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:50,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:50,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:50,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953810262, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:50,263 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:16:50,266 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:50,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,268 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:50,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:50,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:50,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:50,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:50,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:50,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:50,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,415 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121] to rsgroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:50,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:50,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049] are moved back to default 2023-07-21 15:16:50,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:50,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:50,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:50,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:50,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:50,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:50,434 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; 
CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:50,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-21 15:16:50,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:16:50,437 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:50,438 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:50,438 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:50,439 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:50,443 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:50,445 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,446 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e empty. 2023-07-21 15:16:50,447 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,447 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 15:16:50,495 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:50,497 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1bc308e1915c8bec4aa2a365151ace1e, NAME => 'GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:50,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:16:50,567 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:50,567 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 1bc308e1915c8bec4aa2a365151ace1e, disabling compactions & flushes 2023-07-21 15:16:50,567 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,567 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,568 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. after waiting 0 ms 2023-07-21 15:16:50,568 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,568 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,568 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 1bc308e1915c8bec4aa2a365151ace1e: 2023-07-21 15:16:50,571 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:50,572 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952610572"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952610572"}]},"ts":"1689952610572"} 2023-07-21 15:16:50,574 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 15:16:50,575 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:50,576 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952610576"}]},"ts":"1689952610576"} 2023-07-21 15:16:50,578 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 15:16:50,581 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:50,581 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:50,581 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:50,581 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:50,581 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:50,582 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, ASSIGN}] 2023-07-21 15:16:50,585 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, ASSIGN 2023-07-21 15:16:50,587 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:50,737 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:16:50,739 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:50,739 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952610739"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952610739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952610739"}]},"ts":"1689952610739"} 2023-07-21 15:16:50,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:50,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:16:50,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1bc308e1915c8bec4aa2a365151ace1e, NAME => 'GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:50,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:50,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,905 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,911 DEBUG [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/f 2023-07-21 15:16:50,912 DEBUG [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/f 2023-07-21 15:16:50,912 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1bc308e1915c8bec4aa2a365151ace1e columnFamilyName f 2023-07-21 15:16:50,913 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] regionserver.HStore(310): Store=1bc308e1915c8bec4aa2a365151ace1e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:50,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:50,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:50,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1bc308e1915c8bec4aa2a365151ace1e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10932091360, jitterRate=0.0181303471326828}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:50,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1bc308e1915c8bec4aa2a365151ace1e: 2023-07-21 15:16:50,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e., pid=99, masterSystemTime=1689952610896 2023-07-21 15:16:50,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:50,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 
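The CompactionConfiguration line above dumps the store's effective compaction settings: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0. A sketch of the standard configuration keys behind those numbers; the values are simply the ones echoed in the log, set explicitly here for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionKnobs {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Keys map to the values printed by CompactionConfiguration(173) above.
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        System.out.println("ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
      }
    }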
2023-07-21 15:16:50,949 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:50,950 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952610949"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952610949"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952610949"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952610949"}]},"ts":"1689952610949"} 2023-07-21 15:16:50,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 15:16:50,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,46091,1689952592464 in 211 msec 2023-07-21 15:16:50,958 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 15:16:50,958 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, ASSIGN in 374 msec 2023-07-21 15:16:50,958 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:50,959 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952610959"}]},"ts":"1689952610959"} 2023-07-21 15:16:50,960 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 15:16:50,963 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:50,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 532 msec 2023-07-21 15:16:51,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:16:51,048 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-21 15:16:51,048 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. 
Timeout = 60000ms 2023-07-21 15:16:51,048 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:51,056 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:51,056 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-21 15:16:51,056 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:51,056 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 15:16:51,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:51,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:51,070 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:51,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-21 15:16:51,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 15:16:51,076 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:51,077 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,078 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:51,078 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:51,083 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:51,085 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,087 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e empty. 
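The hbase.Waiter lines above show the test blocking (60 s timeout) until every region of GrouptestMultiTableMoveA is assigned before it creates GrouptestMultiTableMoveB. A sketch of that wait, assuming an HBaseTestingUtility instance already running the mini-cluster from this log; names other than waitUntilAllRegionsAssigned are illustrative.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignment {
      // testUtil is assumed to be the utility that started the mini-cluster earlier in this log.
      static void waitForTable(HBaseTestingUtility testUtil, String table) throws Exception {
        // Polls hbase:meta and the master's assignment state until all regions of the
        // table are open, failing after the 60 s timeout seen above.
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf(table), 60_000);
      }
    }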
2023-07-21 15:16:51,087 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,088 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 15:16:51,142 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:51,143 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => e1fde3599f026162c621b8379d86a43e, NAME => 'GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:51,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing e1fde3599f026162c621b8379d86a43e, disabling compactions & flushes 2023-07-21 15:16:51,196 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. after waiting 0 ms 2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,196 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
2023-07-21 15:16:51,196 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for e1fde3599f026162c621b8379d86a43e: 2023-07-21 15:16:51,200 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:51,201 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611201"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952611201"}]},"ts":"1689952611201"} 2023-07-21 15:16:51,203 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:16:51,205 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:51,205 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952611205"}]},"ts":"1689952611205"} 2023-07-21 15:16:51,207 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 15:16:51,212 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:51,212 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:51,212 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:51,212 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:51,212 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:51,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, ASSIGN}] 2023-07-21 15:16:51,217 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, ASSIGN 2023-07-21 15:16:51,219 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:51,369 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:16:51,371 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:51,371 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611371"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952611371"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952611371"}]},"ts":"1689952611371"} 2023-07-21 15:16:51,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:51,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 15:16:51,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1fde3599f026162c621b8379d86a43e, NAME => 'GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:51,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:51,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,540 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,542 DEBUG [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/f 2023-07-21 15:16:51,543 DEBUG [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/f 2023-07-21 15:16:51,543 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1fde3599f026162c621b8379d86a43e columnFamilyName f 2023-07-21 15:16:51,544 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] regionserver.HStore(310): Store=e1fde3599f026162c621b8379d86a43e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:51,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:51,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened e1fde3599f026162c621b8379d86a43e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9617551680, jitterRate=-0.10429570078849792}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:51,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for e1fde3599f026162c621b8379d86a43e: 2023-07-21 15:16:51,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e., pid=102, masterSystemTime=1689952611531 2023-07-21 15:16:51,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
2023-07-21 15:16:51,557 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:51,557 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611557"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952611557"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952611557"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952611557"}]},"ts":"1689952611557"} 2023-07-21 15:16:51,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-21 15:16:51,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,41557,1689952596371 in 185 msec 2023-07-21 15:16:51,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-21 15:16:51,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, ASSIGN in 347 msec 2023-07-21 15:16:51,563 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:51,563 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952611563"}]},"ts":"1689952611563"} 2023-07-21 15:16:51,564 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 15:16:51,567 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:51,568 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 509 msec 2023-07-21 15:16:51,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 15:16:51,677 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-21 15:16:51,677 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-21 15:16:51,677 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:51,682 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-21 15:16:51,683 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:51,683 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 15:16:51,684 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:51,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 15:16:51,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:51,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 15:16:51,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:51,696 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:51,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:51,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:51,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region e1fde3599f026162c621b8379d86a43e to RSGroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, REOPEN/MOVE 2023-07-21 15:16:51,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup 
Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 1bc308e1915c8bec4aa2a365151ace1e to RSGroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:51,710 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, REOPEN/MOVE 2023-07-21 15:16:51,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, REOPEN/MOVE 2023-07-21 15:16:51,711 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:51,714 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, REOPEN/MOVE 2023-07-21 15:16:51,714 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611711"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952611711"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952611711"}]},"ts":"1689952611711"} 2023-07-21 15:16:51,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_2093223013, current retry=0 2023-07-21 15:16:51,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:51,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611715"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952611715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952611715"}]},"ts":"1689952611715"} 2023-07-21 15:16:51,716 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:51,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:51,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing e1fde3599f026162c621b8379d86a43e, disabling compactions & flushes 2023-07-21 15:16:51,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): 
Closing region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. after waiting 0 ms 2023-07-21 15:16:51,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:51,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:51,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bc308e1915c8bec4aa2a365151ace1e, disabling compactions & flushes 2023-07-21 15:16:51,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:51,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:51,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. after waiting 0 ms 2023-07-21 15:16:51,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:51,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:51,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:51,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:51,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1bc308e1915c8bec4aa2a365151ace1e: 2023-07-21 15:16:51,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 1bc308e1915c8bec4aa2a365151ace1e move to jenkins-hbase17.apache.org,37121,1689952592049 record at close sequenceid=2 2023-07-21 15:16:51,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
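The close/reopen churn above is the effect of the MoveTables request logged just before it: both regions are closed on their current servers and re-recorded for jenkins-hbase17.apache.org,37121, a server in the target group. A sketch of such a request, assuming the RSGroupAdminClient from the hbase-rsgroup module this test exercises; the group name is copied from the log, the rest is illustrative.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<TableName> tables = new HashSet<>(Arrays.asList(
              TableName.valueOf("GrouptestMultiTableMoveA"),
              TableName.valueOf("GrouptestMultiTableMoveB")));
          // Triggers REOPEN/MOVE TransitRegionStateProcedures like pid=103/104 traced above:
          // each region is closed on its old server and reopened on a server of the target group.
          rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_2093223013");
        }
      }
    }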
2023-07-21 15:16:51,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for e1fde3599f026162c621b8379d86a43e: 2023-07-21 15:16:51,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding e1fde3599f026162c621b8379d86a43e move to jenkins-hbase17.apache.org,37121,1689952592049 record at close sequenceid=2 2023-07-21 15:16:51,909 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=CLOSED 2023-07-21 15:16:51,909 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952611909"}]},"ts":"1689952611909"} 2023-07-21 15:16:51,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:51,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:51,910 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=CLOSED 2023-07-21 15:16:51,910 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952611910"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952611910"}]},"ts":"1689952611910"} 2023-07-21 15:16:51,918 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-21 15:16:51,918 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,46091,1689952592464 in 197 msec 2023-07-21 15:16:51,919 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-21 15:16:51,919 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:51,919 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,41557,1689952596371 in 199 msec 2023-07-21 15:16:51,920 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,37121,1689952592049; forceNewPlan=false, retain=false 2023-07-21 15:16:52,070 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 
2023-07-21 15:16:52,070 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:52,070 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612070"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952612070"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952612070"}]},"ts":"1689952612070"} 2023-07-21 15:16:52,070 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612070"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952612070"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952612070"}]},"ts":"1689952612070"} 2023-07-21 15:16:52,072 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:52,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:52,230 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 
2023-07-21 15:16:52,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1bc308e1915c8bec4aa2a365151ace1e, NAME => 'GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:52,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:52,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,233 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,234 DEBUG [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/f 2023-07-21 15:16:52,234 DEBUG [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/f 2023-07-21 15:16:52,235 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1bc308e1915c8bec4aa2a365151ace1e columnFamilyName f 2023-07-21 15:16:52,236 INFO [StoreOpener-1bc308e1915c8bec4aa2a365151ace1e-1] regionserver.HStore(310): Store=1bc308e1915c8bec4aa2a365151ace1e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:52,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,238 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,243 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1bc308e1915c8bec4aa2a365151ace1e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10436822400, jitterRate=-0.027995169162750244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:52,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1bc308e1915c8bec4aa2a365151ace1e: 2023-07-21 15:16:52,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e., pid=108, masterSystemTime=1689952612225 2023-07-21 15:16:52,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:52,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:52,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
2023-07-21 15:16:52,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1fde3599f026162c621b8379d86a43e, NAME => 'GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:52,247 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:52,247 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612247"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952612247"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952612247"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952612247"}]},"ts":"1689952612247"} 2023-07-21 15:16:52,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:52,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,250 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,252 DEBUG [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/f 2023-07-21 15:16:52,252 DEBUG [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/f 2023-07-21 15:16:52,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-21 15:16:52,253 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1fde3599f026162c621b8379d86a43e columnFamilyName f 2023-07-21 15:16:52,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,37121,1689952592049 in 177 msec 2023-07-21 15:16:52,254 INFO [StoreOpener-e1fde3599f026162c621b8379d86a43e-1] regionserver.HStore(310): Store=e1fde3599f026162c621b8379d86a43e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:52,255 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, REOPEN/MOVE in 543 msec 2023-07-21 15:16:52,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:52,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened e1fde3599f026162c621b8379d86a43e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10851883360, jitterRate=0.01066039502620697}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:52,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for e1fde3599f026162c621b8379d86a43e: 2023-07-21 15:16:52,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e., pid=107, masterSystemTime=1689952612225 2023-07-21 15:16:52,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:52,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
2023-07-21 15:16:52,271 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:52,271 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612271"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952612271"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952612271"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952612271"}]},"ts":"1689952612271"} 2023-07-21 15:16:52,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-21 15:16:52,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,37121,1689952592049 in 201 msec 2023-07-21 15:16:52,277 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, REOPEN/MOVE in 568 msec 2023-07-21 15:16:52,712 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-21 15:16:52,712 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-21 15:16:52,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-21 15:16:52,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_2093223013. 
2023-07-21 15:16:52,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:52,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:52,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:52,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 15:16:52,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:52,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 15:16:52,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:52,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:52,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:52,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2093223013 2023-07-21 15:16:52,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:52,727 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 15:16:52,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable GrouptestMultiTableMoveA 2023-07-21 15:16:52,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:52,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 15:16:52,737 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952612737"}]},"ts":"1689952612737"} 2023-07-21 15:16:52,740 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 15:16:52,742 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 15:16:52,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, UNASSIGN}] 2023-07-21 15:16:52,744 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, UNASSIGN 2023-07-21 15:16:52,745 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:52,746 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612745"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952612745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952612745"}]},"ts":"1689952612745"} 2023-07-21 15:16:52,747 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:52,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 15:16:52,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bc308e1915c8bec4aa2a365151ace1e, disabling compactions & flushes 2023-07-21 15:16:52,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:52,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:52,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. after waiting 0 ms 2023-07-21 15:16:52,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 
2023-07-21 15:16:52,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:52,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e. 2023-07-21 15:16:52,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1bc308e1915c8bec4aa2a365151ace1e: 2023-07-21 15:16:52,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:52,934 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=1bc308e1915c8bec4aa2a365151ace1e, regionState=CLOSED 2023-07-21 15:16:52,934 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952612934"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952612934"}]},"ts":"1689952612934"} 2023-07-21 15:16:52,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-21 15:16:52,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 1bc308e1915c8bec4aa2a365151ace1e, server=jenkins-hbase17.apache.org,37121,1689952592049 in 189 msec 2023-07-21 15:16:52,940 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-21 15:16:52,940 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1bc308e1915c8bec4aa2a365151ace1e, UNASSIGN in 197 msec 2023-07-21 15:16:52,942 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952612942"}]},"ts":"1689952612942"} 2023-07-21 15:16:52,944 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 15:16:52,946 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 15:16:52,949 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 219 msec 2023-07-21 15:16:53,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 15:16:53,042 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-21 15:16:53,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete GrouptestMultiTableMoveA 2023-07-21 15:16:53,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; 
DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,046 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_2093223013' 2023-07-21 15:16:53,047 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:53,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-21 15:16:53,057 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:53,059 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits] 2023-07-21 15:16:53,067 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e/recovered.edits/7.seqid 2023-07-21 15:16:53,068 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveA/1bc308e1915c8bec4aa2a365151ace1e 2023-07-21 15:16:53,068 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 15:16:53,072 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,076 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from 
hbase:meta 2023-07-21 15:16:53,089 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 15:16:53,101 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,101 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-21 15:16:53,101 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952613101"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:53,105 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:16:53,105 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1bc308e1915c8bec4aa2a365151ace1e, NAME => 'GrouptestMultiTableMoveA,,1689952610430.1bc308e1915c8bec4aa2a365151ace1e.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:16:53,105 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 15:16:53,105 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952613105"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:53,107 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 15:16:53,113 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 15:16:53,116 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 70 msec 2023-07-21 15:16:53,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-21 15:16:53,155 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-21 15:16:53,156 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 15:16:53,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable GrouptestMultiTableMoveB 2023-07-21 15:16:53,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 15:16:53,160 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952613160"}]},"ts":"1689952613160"} 2023-07-21 15:16:53,162 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 15:16:53,163 INFO [PEWorker-3] 
procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 15:16:53,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, UNASSIGN}] 2023-07-21 15:16:53,166 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, UNASSIGN 2023-07-21 15:16:53,167 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:16:53,167 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952613167"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952613167"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952613167"}]},"ts":"1689952613167"} 2023-07-21 15:16:53,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,37121,1689952592049}] 2023-07-21 15:16:53,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 15:16:53,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:53,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing e1fde3599f026162c621b8379d86a43e, disabling compactions & flushes 2023-07-21 15:16:53,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:53,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:53,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. after waiting 0 ms 2023-07-21 15:16:53,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 2023-07-21 15:16:53,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:53,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e. 
2023-07-21 15:16:53,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for e1fde3599f026162c621b8379d86a43e: 2023-07-21 15:16:53,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:53,330 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=e1fde3599f026162c621b8379d86a43e, regionState=CLOSED 2023-07-21 15:16:53,330 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689952613330"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952613330"}]},"ts":"1689952613330"} 2023-07-21 15:16:53,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 15:16:53,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure e1fde3599f026162c621b8379d86a43e, server=jenkins-hbase17.apache.org,37121,1689952592049 in 163 msec 2023-07-21 15:16:53,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-21 15:16:53,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=e1fde3599f026162c621b8379d86a43e, UNASSIGN in 169 msec 2023-07-21 15:16:53,335 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952613335"}]},"ts":"1689952613335"} 2023-07-21 15:16:53,341 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 15:16:53,343 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 15:16:53,354 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 189 msec 2023-07-21 15:16:53,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 15:16:53,463 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-21 15:16:53,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete GrouptestMultiTableMoveB 2023-07-21 15:16:53,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,467 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_2093223013' 2023-07-21 15:16:53,469 DEBUG 
[PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,482 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:53,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:53,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,491 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits] 2023-07-21 15:16:53,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-21 15:16:53,496 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits/7.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e/recovered.edits/7.seqid 2023-07-21 15:16:53,497 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/GrouptestMultiTableMoveB/e1fde3599f026162c621b8379d86a43e 2023-07-21 15:16:53,497 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 15:16:53,502 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,504 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 15:16:53,506 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 15:16:53,507 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,508 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-21 15:16:53,508 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952613508"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:53,510 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:16:53,511 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e1fde3599f026162c621b8379d86a43e, NAME => 'GrouptestMultiTableMoveB,,1689952611058.e1fde3599f026162c621b8379d86a43e.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:16:53,511 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 15:16:53,511 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952613511"}]},"ts":"9223372036854775807"} 2023-07-21 15:16:53,512 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 15:16:53,514 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 15:16:53,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 52 msec 2023-07-21 15:16:53,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-21 15:16:53,593 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-21 15:16:53,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:16:53,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121] to rsgroup default 2023-07-21 15:16:53,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2093223013 2023-07-21 15:16:53,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_2093223013, current retry=0 2023-07-21 15:16:53,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049] are moved back to Group_testMultiTableMove_2093223013 2023-07-21 15:16:53,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_2093223013 => default 2023-07-21 15:16:53,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testMultiTableMove_2093223013 2023-07-21 15:16:53,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:53,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:16:53,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:53,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:53,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:53,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,640 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:53,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:53,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:53,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:53,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:53,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953813660, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:53,661 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:53,663 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:53,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,665 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:53,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:53,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,697 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511 (was 512), OpenFileDescriptor=783 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=623 (was 660), ProcessCount=184 (was 186), AvailableMemoryMB=3394 (was 1380) - AvailableMemoryMB LEAK? 
- 2023-07-21 15:16:53,697 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-21 15:16:53,723 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=511, OpenFileDescriptor=783, MaxFileDescriptor=60000, SystemLoadAverage=623, ProcessCount=184, AvailableMemoryMB=3393 2023-07-21 15:16:53,723 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-21 15:16:53,723 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 15:16:53,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:16:53,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:53,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:53,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:53,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,746 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:53,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:53,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-21 15:16:53,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:53,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:53,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:53,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953813763, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:53,764 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:16:53,766 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:53,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,767 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:53,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:53,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:53,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup oldGroup 2023-07-21 15:16:53,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:53,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup oldGroup 2023-07-21 15:16:53,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:53,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to default 2023-07-21 15:16:53,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 15:16:53,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 15:16:53,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 15:16:53,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:53,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,813 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup anotherRSGroup 2023-07-21 15:16:53,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 15:16:53,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:53,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:53,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323] to rsgroup anotherRSGroup 2023-07-21 15:16:53,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 15:16:53,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:53,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:53,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,43323,1689952592244] are moved back to default 2023-07-21 15:16:53,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 15:16:53,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 
15:16:53,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 15:16:53,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 15:16:53,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:53,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 15:16:53,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 136.243.18.41:53818 deadline: 1689953813861, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 15:16:53,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 15:16:53,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 136.243.18.41:53818 deadline: 1689953813865, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 15:16:53,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from default to newRSGroup2 2023-07-21 15:16:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 136.243.18.41:53818 deadline: 1689953813869, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 15:16:53,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldGroup to default 2023-07-21 15:16:53,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:53,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 136.243.18.41:53818 deadline: 1689953813870, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 15:16:53,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:53,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:53,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
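The rename attempts recorded above (callIds 572, 574, 576 and 578) were all rejected by the rsgroup constraint checks: the source group must exist, the target name must not already be taken, and the default group cannot be renamed. Below is a minimal sketch of how those checks could be exercised from a client, assuming only a renameRSGroup(oldName, newName)-style call like the one the RenameRSGroup RPC above implies; the GroupAdmin interface and method names are illustrative, not the actual TestRSGroupsAdmin1 code.

import org.apache.hadoop.hbase.constraint.ConstraintException;

public class RenameConstraintsSketch {
  // Stand-in for an rsgroup admin client; not the real RSGroupAdminClient API.
  interface GroupAdmin {
    void renameRSGroup(String oldName, String newName) throws java.io.IOException;
  }

  // Each of these renames is expected to fail with a ConstraintException,
  // matching the exceptions logged above.
  static void expectRejected(GroupAdmin admin, String from, String to) throws java.io.IOException {
    try {
      admin.renameRSGroup(from, to);
      throw new AssertionError("rename " + from + " -> " + to + " should have been rejected");
    } catch (ConstraintException expected) {
      // e.g. "RSGroup nonExistingRSGroup does not exist",
      //      "Group already exists: anotherRSGroup",
      //      "Can't rename default rsgroup"
    }
  }

  static void exerciseConstraints(GroupAdmin admin) throws java.io.IOException {
    expectRejected(admin, "nonExistingRSGroup", "newRSGroup1"); // source group missing
    expectRejected(admin, "oldGroup", "anotherRSGroup");        // target name already taken
    expectRejected(admin, "default", "newRSGroup2");            // default cannot be renamed
    expectRejected(admin, "oldGroup", "default");               // "default" is already taken too
  }
}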
2023-07-21 15:16:53,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323] to rsgroup default 2023-07-21 15:16:53,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 15:16:53,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:53,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 15:16:53,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,43323,1689952592244] are moved back to anotherRSGroup 2023-07-21 15:16:53,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 15:16:53,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup anotherRSGroup 2023-07-21 15:16:53,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 15:16:53,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 15:16:53,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:16:53,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 15:16:53,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:53,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 15:16:53,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to oldGroup 2023-07-21 15:16:53,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 15:16:53,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup oldGroup 2023-07-21 15:16:53,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:53,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:53,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:53,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
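The entries above show the between-test cleanup the harness runs: empty moveTables/moveServers calls against the default group, the test servers moved back out of anotherRSGroup and oldGroup, and the emptied groups removed, with the ZK GroupInfo count shrinking as each /hbase/rsgroup znode set is rewritten. A rough sketch of that sequence follows; the GroupCleanup interface is a stand-in, not the real RSGroupAdminClient API, and only the group and server names are taken from the log.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class RSGroupCleanupSketch {
  // Stand-in for the admin operations seen in the log; not the real client API.
  interface GroupCleanup {
    void moveServersToDefault(List<String> servers) throws IOException; // "move servers [...] to rsgroup default"
    void removeRSGroup(String name) throws IOException;                 // "remove rsgroup <name>"
  }

  // Mirrors the order of operations in the entries above: drain each test
  // group back into default, then remove the now-empty group.
  static void cleanup(GroupCleanup admin) throws IOException {
    admin.moveServersToDefault(Arrays.asList("jenkins-hbase17.apache.org:43323"));
    admin.removeRSGroup("anotherRSGroup");
    admin.moveServersToDefault(Arrays.asList(
        "jenkins-hbase17.apache.org:37121", "jenkins-hbase17.apache.org:41557"));
    admin.removeRSGroup("oldGroup");
  }
}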
2023-07-21 15:16:53,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:53,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:53,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:53,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:53,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:53,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:53,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:53,997 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:53,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:54,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:54,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:54,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:54,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:54,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953814010, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:54,010 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:54,012 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:54,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,014 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:54,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:54,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:54,040 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515 (was 511) Potentially hanging thread: hconnection-0x75c83904-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=783 (was 783), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=623 (was 623), ProcessCount=184 (was 184), AvailableMemoryMB=3376 (was 3393) 2023-07-21 15:16:54,040 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 15:16:54,083 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=515, OpenFileDescriptor=783, MaxFileDescriptor=60000, SystemLoadAverage=623, ProcessCount=184, AvailableMemoryMB=3374 2023-07-21 15:16:54,083 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 15:16:54,083 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 15:16:54,100 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,100 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,104 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:54,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
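The ResourceChecker entries above snapshot per-test resource usage (thread count, open file descriptors, load average, process count, free memory) before and after each test method and warn when the thread count exceeds 500, which is why every test in this run logs "Thread=... is superior to 500". Below is a minimal sketch of that before/after bookkeeping idea using only standard JMX; it is not HBase's ResourceChecker implementation.

import java.lang.management.ManagementFactory;

public class ResourceSnapshotSketch {
  private static final int THREAD_WARN_LIMIT = 500; // same limit the log warns against

  static int liveThreads() {
    return ManagementFactory.getThreadMXBean().getThreadCount();
  }

  public static void main(String[] args) {
    int before = liveThreads();
    // ... a test method would run here ...
    int after = liveThreads();
    System.out.printf("before: Thread=%d, after: Thread=%d (was %d)%n", before, after, before);
    if (after > THREAD_WARN_LIMIT) {
      System.out.printf("Thread=%d is superior to %d%n", after, THREAD_WARN_LIMIT);
    }
  }
}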
2023-07-21 15:16:54,104 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:54,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:54,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:54,107 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:54,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:54,116 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:54,121 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:54,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:54,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:54,128 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:54,138 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,138 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:54,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:54,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953814142, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:54,144 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:54,152 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:54,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,155 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:54,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:54,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:54,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:54,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:54,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup oldgroup 2023-07-21 15:16:54,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:54,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,166 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:54,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:54,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup oldgroup 2023-07-21 15:16:54,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:54,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to default 2023-07-21 15:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 15:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:54,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:54,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:54,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 15:16:54,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:54,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:54,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 15:16:54,187 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:54,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-21 15:16:54,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 15:16:54,189 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:54,190 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,190 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,190 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:54,192 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:54,194 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,195 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e empty. 
2023-07-21 15:16:54,195 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,195 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 15:16:54,209 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:54,210 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ad7c7c13a1d346732619829706d4f9e, NAME => 'testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 0ad7c7c13a1d346732619829706d4f9e, disabling compactions & flushes 2023-07-21 15:16:54,225 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. after waiting 0 ms 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,225 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,225 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:54,227 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:54,228 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952614228"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952614228"}]},"ts":"1689952614228"} 2023-07-21 15:16:54,230 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 15:16:54,230 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:54,231 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952614231"}]},"ts":"1689952614231"} 2023-07-21 15:16:54,232 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 15:16:54,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:54,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:54,235 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:54,235 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:54,235 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, ASSIGN}] 2023-07-21 15:16:54,237 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, ASSIGN 2023-07-21 15:16:54,238 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:54,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 15:16:54,388 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:16:54,389 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:54,390 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952614389"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952614389"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952614389"}]},"ts":"1689952614389"} 2023-07-21 15:16:54,391 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:54,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 15:16:54,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ad7c7c13a1d346732619829706d4f9e, NAME => 'testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:54,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:54,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,548 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,550 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:54,550 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:54,550 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ad7c7c13a1d346732619829706d4f9e columnFamilyName tr 2023-07-21 15:16:54,551 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(310): Store=0ad7c7c13a1d346732619829706d4f9e/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:54,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:54,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0ad7c7c13a1d346732619829706d4f9e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11199091680, jitterRate=0.042996689677238464}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:54,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:54,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e., pid=119, masterSystemTime=1689952614543 2023-07-21 15:16:54,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
2023-07-21 15:16:54,561 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:54,561 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952614560"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952614560"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952614560"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952614560"}]},"ts":"1689952614560"} 2023-07-21 15:16:54,564 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-21 15:16:54,564 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244 in 171 msec 2023-07-21 15:16:54,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 15:16:54,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, ASSIGN in 329 msec 2023-07-21 15:16:54,568 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:54,568 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952614568"}]},"ts":"1689952614568"} 2023-07-21 15:16:54,570 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 15:16:54,573 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:54,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 389 msec 2023-07-21 15:16:54,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-21 15:16:54,792 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-21 15:16:54,792 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 15:16:54,793 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:54,798 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
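The CreateTableProcedure traced above (pid=117) builds the 'testRename' table with a single 'tr' column family and the attributes printed in the create line, then the test waits for the region to be assigned. A short sketch of creating an equivalent table through the Admin API is shown below; the table and family names and the VERSIONS/BLOOMFILTER settings are read off the log line, while the connection setup is illustrative.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'tr' with VERSIONS => '1' and BLOOMFILTER => 'NONE',
      // matching the descriptor printed in the create log line.
      TableName table = TableName.valueOf("testRename");
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build());
      // createTable blocks until the procedure finishes, which is the
      // "Operation: CREATE, Table Name: default:testRename, procId: 117 completed" line.
    }
  }
}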
2023-07-21 15:16:54,798 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:54,798 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-21 15:16:54,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [testRename] to rsgroup oldgroup 2023-07-21 15:16:54,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:54,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:54,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:54,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:54,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 15:16:54,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 0ad7c7c13a1d346732619829706d4f9e to RSGroup oldgroup 2023-07-21 15:16:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:54,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE 2023-07-21 15:16:54,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 15:16:54,809 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE 2023-07-21 15:16:54,810 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:54,810 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952614810"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952614810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952614810"}]},"ts":"1689952614810"} 2023-07-21 15:16:54,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:54,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0ad7c7c13a1d346732619829706d4f9e, disabling compactions & flushes 2023-07-21 15:16:54,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. after waiting 0 ms 2023-07-21 15:16:54,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:54,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:54,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
2023-07-21 15:16:54,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:54,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 0ad7c7c13a1d346732619829706d4f9e move to jenkins-hbase17.apache.org,41557,1689952596371 record at close sequenceid=2 2023-07-21 15:16:54,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:54,976 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=CLOSED 2023-07-21 15:16:54,976 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952614976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952614976"}]},"ts":"1689952614976"} 2023-07-21 15:16:54,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 15:16:54,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244 in 167 msec 2023-07-21 15:16:54,983 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,41557,1689952596371; forceNewPlan=false, retain=false 2023-07-21 15:16:55,133 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:55,134 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:55,134 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952615133"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952615133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952615133"}]},"ts":"1689952615133"} 2023-07-21 15:16:55,136 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:55,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
2023-07-21 15:16:55,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ad7c7c13a1d346732619829706d4f9e, NAME => 'testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:55,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:55,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,308 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,310 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:55,310 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:55,310 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ad7c7c13a1d346732619829706d4f9e columnFamilyName tr 2023-07-21 15:16:55,311 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(310): Store=0ad7c7c13a1d346732619829706d4f9e/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:55,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:55,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0ad7c7c13a1d346732619829706d4f9e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9614301440, jitterRate=-0.10459840297698975}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:55,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:55,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e., pid=122, masterSystemTime=1689952615295 2023-07-21 15:16:55,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:55,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:55,321 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:55,321 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952615321"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952615321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952615321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952615321"}]},"ts":"1689952615321"} 2023-07-21 15:16:55,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-21 15:16:55,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,41557,1689952596371 in 187 msec 2023-07-21 15:16:55,329 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE in 518 msec 2023-07-21 15:16:55,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-21 15:16:55,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
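At this point the MoveTables call has driven a REOPEN/MOVE of the table's only region off jenkins-hbase17.apache.org:43323 onto :41557, ending with "All regions from table(s) [testRename] moved to target group oldgroup." The following is a sketch of issuing that MoveTables RPC and then confirming the table's group, again assuming the RSGroupAdminClient API; the group and table names come from the log.

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("testRename");

      // MoveTables: the master reassigns the table's regions onto servers in the
      // target group, which is the CLOSE/OPEN procedure chain traced above.
      rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");

      // GetRSGroupInfoOfTable: verify the table now resolves to "oldgroup".
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println(table + " is in group " + group.getName());
    }
  }
}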
2023-07-21 15:16:55,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:55,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:55,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:55,816 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:55,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 15:16:55,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:55,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 15:16:55,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:55,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 15:16:55,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:55,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:55,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:55,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup normal 2023-07-21 15:16:55,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:55,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:55,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:55,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
2023-07-21 15:16:55,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:55,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:55,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:55,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:55,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323] to rsgroup normal 2023-07-21 15:16:55,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:55,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:55,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:55,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:55,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:55,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:55,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,43323,1689952592244] are moved back to default 2023-07-21 15:16:55,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 15:16:55,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:55,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:55,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:55,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=normal 2023-07-21 15:16:55,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:55,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:55,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 15:16:55,849 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:55,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-21 15:16:55,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 15:16:55,852 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:55,853 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:55,853 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:55,853 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:55,854 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:55,856 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:55,858 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:55,859 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 empty. 
2023-07-21 15:16:55,859 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:55,860 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 15:16:55,883 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:55,884 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 36a5507710a8db368e3f50132ff98e27, NAME => 'unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 36a5507710a8db368e3f50132ff98e27, disabling compactions & flushes 2023-07-21 15:16:55,925 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. after waiting 0 ms 2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:55,925 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 
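
The CreateTableProcedure above (pid=123) is the server side of an ordinary create-table request for 'unmovedTable' with a single column family 'ut'; every family attribute in the logged descriptor is the default. A rough client-side equivalent using the standard Admin API, shown only as a sketch:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Create 'unmovedTable' with one family 'ut', matching the descriptor the
    // master logged at 15:16:55,846; family attributes are left at their defaults.
    final class CreateUnmovedTableSketch {
        static void create(Admin admin) throws IOException {
            admin.createTable(
                TableDescriptorBuilder.newBuilder(TableName.valueOf("unmovedTable"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
                    .build());   // blocks until the CreateTableProcedure completes
        }
    }
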
2023-07-21 15:16:55,925 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:55,928 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:55,931 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952615930"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952615930"}]},"ts":"1689952615930"} 2023-07-21 15:16:55,932 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:16:55,933 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:55,934 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952615933"}]},"ts":"1689952615933"} 2023-07-21 15:16:55,935 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 15:16:55,937 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, ASSIGN}] 2023-07-21 15:16:55,939 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, ASSIGN 2023-07-21 15:16:55,940 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:55,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 15:16:56,091 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:56,092 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952616091"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952616091"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952616091"}]},"ts":"1689952616091"} 2023-07-21 15:16:56,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:56,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-21 15:16:56,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36a5507710a8db368e3f50132ff98e27, NAME => 'unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:56,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:56,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,252 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,253 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:56,253 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:56,254 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36a5507710a8db368e3f50132ff98e27 columnFamilyName ut 2023-07-21 15:16:56,254 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(310): Store=36a5507710a8db368e3f50132ff98e27/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:56,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:56,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 36a5507710a8db368e3f50132ff98e27; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11153974560, jitterRate=0.038794830441474915}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:56,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:56,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27., pid=125, masterSystemTime=1689952616246 2023-07-21 15:16:56,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 
2023-07-21 15:16:56,264 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:56,264 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952616264"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952616264"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952616264"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952616264"}]},"ts":"1689952616264"} 2023-07-21 15:16:56,268 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-21 15:16:56,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464 in 172 msec 2023-07-21 15:16:56,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 15:16:56,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, ASSIGN in 332 msec 2023-07-21 15:16:56,274 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:56,274 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952616274"}]},"ts":"1689952616274"} 2023-07-21 15:16:56,276 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 15:16:56,279 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:56,281 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 433 msec 2023-07-21 15:16:56,404 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:56,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 15:16:56,455 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-21 15:16:56,456 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 15:16:56,456 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:56,462 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
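
The "Waiting until all regions of table unmovedTable get assigned" lines come from the test utility's post-create check with a 60,000 ms timeout. A hedged sketch of that step, assuming the HBaseTestingUtility instance used elsewhere in this run:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // After the CREATE procedure (procId 123) finishes, block until the
    // assignment manager reports every region of 'unmovedTable' as open.
    final class WaitAssignedSketch {
        static void await(HBaseTestingUtility testUtil) throws Exception {
            testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("unmovedTable"), 60000);
        }
    }
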
2023-07-21 15:16:56,462 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:56,462 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-21 15:16:56,465 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [unmovedTable] to rsgroup normal 2023-07-21 15:16:56,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 15:16:56,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:56,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:56,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:56,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:56,470 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 15:16:56,470 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 36a5507710a8db368e3f50132ff98e27 to RSGroup normal 2023-07-21 15:16:56,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE 2023-07-21 15:16:56,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 15:16:56,471 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE 2023-07-21 15:16:56,472 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:56,472 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952616472"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952616472"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952616472"}]},"ts":"1689952616472"} 2023-07-21 15:16:56,473 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:56,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1604): Closing 36a5507710a8db368e3f50132ff98e27, disabling compactions & flushes 2023-07-21 15:16:56,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. after waiting 0 ms 2023-07-21 15:16:56,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:56,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:56,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:56,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 36a5507710a8db368e3f50132ff98e27 move to jenkins-hbase17.apache.org,43323,1689952592244 record at close sequenceid=2 2023-07-21 15:16:56,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:56,657 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=CLOSED 2023-07-21 15:16:56,657 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952616657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952616657"}]},"ts":"1689952616657"} 2023-07-21 15:16:56,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-21 15:16:56,678 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464 in 195 msec 2023-07-21 15:16:56,679 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:56,829 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:56,830 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952616829"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952616829"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952616829"}]},"ts":"1689952616829"} 2023-07-21 15:16:56,835 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:56,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36a5507710a8db368e3f50132ff98e27, NAME => 'unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,002 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,003 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:57,003 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:57,004 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36a5507710a8db368e3f50132ff98e27 columnFamilyName ut 2023-07-21 15:16:57,005 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(310): Store=36a5507710a8db368e3f50132ff98e27/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:57,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 36a5507710a8db368e3f50132ff98e27; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11184773920, jitterRate=0.04166324436664581}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:57,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:57,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27., pid=128, masterSystemTime=1689952616995 2023-07-21 15:16:57,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 
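
The REOPEN/MOVE of region 36a5507710a8db368e3f50132ff98e27 onto jenkins-hbase17.apache.org,43323 is the effect of moving the table into the 'normal' group. A sketch of the corresponding client call, again assuming the RSGroupAdminClient API:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Move 'unmovedTable' into the 'normal' group; the master then reopens its
    // single region on a server belonging to that group (the REOPEN/MOVE above).
    final class MoveTableToNormalSketch {
        static void move(RSGroupAdminClient rsGroupAdmin) throws IOException {
            rsGroupAdmin.moveTables(
                Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
        }
    }
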
2023-07-21 15:16:57,015 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:57,015 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952617015"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952617015"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952617015"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952617015"}]},"ts":"1689952617015"} 2023-07-21 15:16:57,018 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 15:16:57,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,43323,1689952592244 in 181 msec 2023-07-21 15:16:57,020 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE in 549 msec 2023-07-21 15:16:57,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-21 15:16:57,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 15:16:57,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:57,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:57,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:57,477 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:57,479 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 15:16:57,479 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:57,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=normal 2023-07-21 15:16:57,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:57,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 15:16:57,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:57,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//136.243.18.41 rename rsgroup from oldgroup to newgroup 2023-07-21 15:16:57,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:57,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:57,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 15:16:57,578 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 15:16:57,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:57,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:57,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=newgroup 2023-07-21 15:16:57,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:57,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=testRename 2023-07-21 15:16:57,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:57,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 15:16:57,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:57,589 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:57,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:57,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [unmovedTable] to rsgroup default 2023-07-21 15:16:57,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:57,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:57,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:57,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:57,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:57,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 15:16:57,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 36a5507710a8db368e3f50132ff98e27 to RSGroup default 2023-07-21 15:16:57,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE 2023-07-21 15:16:57,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:16:57,600 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE 2023-07-21 15:16:57,601 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:57,601 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952617601"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952617601"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952617601"}]},"ts":"1689952617601"} 2023-07-21 15:16:57,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:57,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 
36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 36a5507710a8db368e3f50132ff98e27, disabling compactions & flushes 2023-07-21 15:16:57,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. after waiting 0 ms 2023-07-21 15:16:57,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:57,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:57,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 36a5507710a8db368e3f50132ff98e27 move to jenkins-hbase17.apache.org,46091,1689952592464 record at close sequenceid=5 2023-07-21 15:16:57,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:57,765 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=CLOSED 2023-07-21 15:16:57,765 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952617765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952617765"}]},"ts":"1689952617765"} 2023-07-21 15:16:57,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-21 15:16:57,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,43323,1689952592244 in 163 msec 2023-07-21 15:16:57,767 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:57,918 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPENING, 
regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:57,918 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952617918"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952617918"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952617918"}]},"ts":"1689952617918"} 2023-07-21 15:16:57,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:16:58,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:58,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36a5507710a8db368e3f50132ff98e27, NAME => 'unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:58,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:58,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,077 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,078 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:58,078 DEBUG [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/ut 2023-07-21 15:16:58,078 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36a5507710a8db368e3f50132ff98e27 columnFamilyName ut 2023-07-21 15:16:58,078 INFO [StoreOpener-36a5507710a8db368e3f50132ff98e27-1] regionserver.HStore(310): Store=36a5507710a8db368e3f50132ff98e27/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:58,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:16:58,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 36a5507710a8db368e3f50132ff98e27; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11698221280, jitterRate=0.08948175609111786}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:58,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:16:58,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27., pid=131, masterSystemTime=1689952618071 2023-07-21 15:16:58,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:16:58,086 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 
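
The RenameRSGroup request at 15:16:57,481 ("rename rsgroup from oldgroup to newgroup") and the subsequent move of unmovedTable back to 'default' would look roughly like the sketch below on the client side. renameRSGroup is assumed to be exposed by the admin client on this branch, since the RenameRSGroup RPC appears in the log:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Rename the group holding testRename, then park unmovedTable back in
    // 'default', mirroring the RenameRSGroup and MoveTables requests above.
    final class RenameGroupSketch {
        static void run(RSGroupAdminClient rsGroupAdmin) throws IOException {
            rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
            rsGroupAdmin.moveTables(
                Collections.singleton(TableName.valueOf("unmovedTable")), "default");
        }
    }
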
2023-07-21 15:16:58,086 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=36a5507710a8db368e3f50132ff98e27, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:16:58,086 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689952618086"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952618086"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952618086"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952618086"}]},"ts":"1689952618086"} 2023-07-21 15:16:58,088 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-21 15:16:58,088 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 36a5507710a8db368e3f50132ff98e27, server=jenkins-hbase17.apache.org,46091,1689952592464 in 167 msec 2023-07-21 15:16:58,089 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=36a5507710a8db368e3f50132ff98e27, REOPEN/MOVE in 489 msec 2023-07-21 15:16:58,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-21 15:16:58,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-21 15:16:58,601 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:58,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43323] to rsgroup default 2023-07-21 15:16:58,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 15:16:58,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:58,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:58,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:58,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:16:58,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 15:16:58,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,43323,1689952592244] are moved back to normal 2023-07-21 15:16:58,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 15:16:58,607 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:58,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup normal 2023-07-21 15:16:58,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:58,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:58,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:58,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 15:16:58,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:58,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:58,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:16:58,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:58,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:58,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:58,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:58,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:58,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:58,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:58,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:58,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [testRename] to rsgroup default 2023-07-21 15:16:58,626 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:58,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:58,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:58,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 15:16:58,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(345): Moving region 0ad7c7c13a1d346732619829706d4f9e to RSGroup default 2023-07-21 15:16:58,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE 2023-07-21 15:16:58,628 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:16:58,628 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE 2023-07-21 15:16:58,629 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:16:58,629 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952618629"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952618629"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952618629"}]},"ts":"1689952618629"} 2023-07-21 15:16:58,630 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,41557,1689952596371}] 2023-07-21 15:16:58,715 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-21 15:16:58,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:58,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0ad7c7c13a1d346732619829706d4f9e, disabling compactions & flushes 2023-07-21 15:16:58,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:58,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:58,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
after waiting 0 ms 2023-07-21 15:16:58,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:58,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:58,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:58,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 0ad7c7c13a1d346732619829706d4f9e move to jenkins-hbase17.apache.org,43323,1689952592244 record at close sequenceid=5 2023-07-21 15:16:58,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:58,792 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=CLOSED 2023-07-21 15:16:58,792 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952618792"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952618792"}]},"ts":"1689952618792"} 2023-07-21 15:16:58,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-21 15:16:58,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,41557,1689952596371 in 164 msec 2023-07-21 15:16:58,795 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:58,946 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:16:58,946 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:58,946 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952618946"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952618946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952618946"}]},"ts":"1689952618946"} 2023-07-21 15:16:58,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:16:59,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:59,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ad7c7c13a1d346732619829706d4f9e, NAME => 'testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:59,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,104 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,105 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:59,105 DEBUG [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/tr 2023-07-21 15:16:59,106 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ad7c7c13a1d346732619829706d4f9e columnFamilyName tr 2023-07-21 15:16:59,106 INFO [StoreOpener-0ad7c7c13a1d346732619829706d4f9e-1] regionserver.HStore(310): Store=0ad7c7c13a1d346732619829706d4f9e/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:59,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:16:59,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0ad7c7c13a1d346732619829706d4f9e; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11801858560, jitterRate=0.09913372993469238}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:59,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:16:59,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e., pid=134, masterSystemTime=1689952619099 2023-07-21 15:16:59,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:16:59,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
2023-07-21 15:16:59,114 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0ad7c7c13a1d346732619829706d4f9e, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:16:59,114 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689952619114"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952619114"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952619114"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952619114"}]},"ts":"1689952619114"} 2023-07-21 15:16:59,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-21 15:16:59,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 0ad7c7c13a1d346732619829706d4f9e, server=jenkins-hbase17.apache.org,43323,1689952592244 in 168 msec 2023-07-21 15:16:59,117 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0ad7c7c13a1d346732619829706d4f9e, REOPEN/MOVE in 490 msec 2023-07-21 15:16:59,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-21 15:16:59,628 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-21 15:16:59,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:59,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:16:59,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 15:16:59,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 15:16:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to newgroup 2023-07-21 15:16:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 15:16:59,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:59,635 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup newgroup 2023-07-21 15:16:59,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:59,641 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:59,644 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:59,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:59,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:59,649 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:59,652 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,652 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,654 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:59,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953819654, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:59,654 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:59,656 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:59,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,657 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:59,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:59,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,676 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=506 (was 515), OpenFileDescriptor=776 (was 783), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=621 (was 623), ProcessCount=184 (was 184), AvailableMemoryMB=3233 (was 3374) 2023-07-21 15:16:59,678 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-21 15:16:59,693 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=506, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=621, ProcessCount=184, AvailableMemoryMB=3232 2023-07-21 15:16:59,694 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-21 15:16:59,694 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 15:16:59,699 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:59,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:16:59,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:59,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:59,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:59,702 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:59,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:59,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:59,711 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:59,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:59,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:59,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:59,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:59,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953819721, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:59,722 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:16:59,723 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:59,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,724 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:59,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:59,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 15:16:59,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:16:59,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 15:16:59,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 15:16:59,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=bogus 2023-07-21 15:16:59,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup bogus 2023-07-21 15:16:59,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 136.243.18.41:53818 deadline: 1689953819732, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 15:16:59,734 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [bogus:123] to rsgroup bogus 2023-07-21 15:16:59,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 136.243.18.41:53818 deadline: 1689953819734, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 15:16:59,736 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:16:59,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=true 2023-07-21 15:16:59,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//136.243.18.41 balance rsgroup, group=bogus 2023-07-21 15:16:59,741 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 136.243.18.41:53818 deadline: 1689953819740, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 15:16:59,745 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,745 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,745 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:59,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:16:59,745 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:59,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:59,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:59,747 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:59,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:59,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:59,753 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:59,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:59,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:59,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:59,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:59,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953819762, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:59,765 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:59,766 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:59,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,767 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:59,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:59,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,787 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510 (was 506) Potentially hanging thread: hconnection-0x75c83904-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x60d365ca-shared-pool-27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=621 (was 621), ProcessCount=184 (was 184), AvailableMemoryMB=3233 (was 3232) - AvailableMemoryMB LEAK? - 2023-07-21 15:16:59,787 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-21 15:16:59,806 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=621, ProcessCount=184, AvailableMemoryMB=3233 2023-07-21 15:16:59,807 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-21 15:16:59,807 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 15:16:59,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:59,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
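The "Server ... is either offline or it does not exist" failures above come from the per-method cleanup in TestRSGroupsBase, which tries to park the master's address (port 33893) in a dedicated "master" rsgroup. The master is not registered as an online region server, so RSGroupAdminServer.moveServers() rejects the call and the test only logs "Got this on setup, FYI". The sketch below mirrors that step; only the moveServers(Set<Address>, String) signature is taken from the stack trace, the helper class and method names are illustrative.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveMasterToGroupSketch {
  private MoveMasterToGroupSketch() {}

  // Attempt to move the master's address into the "master" rsgroup.
  // When the master does not also run as a region server, the call fails with the
  // ConstraintException seen in the log; the test treats that as non-fatal.
  public static void tryMoveMaster(RSGroupAdminClient rsGroupAdmin, String host, int port) {
    try {
      rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), "master");
    } catch (IOException e) {
      System.out.println("Got this on setup, FYI: " + e.getMessage());
    }
  }
}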
2023-07-21 15:16:59,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:59,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:59,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:59,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:59,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:59,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:59,820 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:16:59,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:59,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:59,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:59,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:16:59,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:59,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953819831, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:16:59,832 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:16:59,833 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:59,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,834 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:59,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:59,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:59,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:59,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:59,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:59,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:16:59,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to default 2023-07-21 15:16:59,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:59,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:59,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:59,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, 
group=Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,856 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:59,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:59,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:16:59,860 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:59,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-21 15:16:59,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-21 15:16:59,862 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:59,862 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:16:59,863 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:59,863 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:59,865 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 empty. 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 empty. 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 empty. 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 empty. 2023-07-21 15:16:59,869 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 empty. 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:16:59,870 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:16:59,870 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 15:16:59,883 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:59,884 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => ac902d7f9dc574047d02daa1997f7287, NAME => 'Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.', STARTKEY => '', ENDKEY 
=> 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:59,885 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => de02e33977af87270422585e3e1448a5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:59,885 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2f87b0818400df6a6e6d9fdffdefb602, NAME => 'Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:59,913 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,913 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 2f87b0818400df6a6e6d9fdffdefb602, disabling compactions & flushes 2023-07-21 15:16:59,914 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. after waiting 0 ms 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 
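The CreateTableProcedure entries above correspond to a pre-split create of 'Group_testDisabledTableMove' with a single family 'f' and five regions bounded at 'aaaaa', i\xBF\x14i\xBE, r\x1C\xC7r\x1B and 'zzzzz'. A minimal sketch of the Admin call that produces such a table is shown below; it uses the public Admin/TableDescriptorBuilder API and substitutes printable placeholders for the two binary split keys, so it is illustrative rather than the test's exact code.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'f' with defaults, matching the descriptor printed in the log.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Four split keys yield the five regions created above. The two middle keys in the
      // real test are binary (i\xBF\x14i\xBE and r\x1C\xC7r\x1B); printable placeholders
      // keep this sketch readable.
      byte[][] splitKeys = {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("iiiii"),
          Bytes.toBytes("rrrrr"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splitKeys);
    }
  }
}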
2023-07-21 15:16:59,914 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 2f87b0818400df6a6e6d9fdffdefb602: 2023-07-21 15:16:59,914 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => fd64de96c11a3b2462797fc40681d138, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing ac902d7f9dc574047d02daa1997f7287, disabling compactions & flushes 2023-07-21 15:16:59,914 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing de02e33977af87270422585e3e1448a5, disabling compactions & flushes 2023-07-21 15:16:59,914 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:16:59,914 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 
after waiting 0 ms 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. after waiting 0 ms 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:16:59,915 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:16:59,915 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for ac902d7f9dc574047d02daa1997f7287: 2023-07-21 15:16:59,915 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for de02e33977af87270422585e3e1448a5: 2023-07-21 15:16:59,915 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 377d3bd2a68dbf22eb9c5403ea1a5a44, NAME => 'Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing fd64de96c11a3b2462797fc40681d138, disabling compactions & flushes 2023-07-21 15:16:59,925 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 
after waiting 0 ms 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:16:59,925 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:16:59,925 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for fd64de96c11a3b2462797fc40681d138: 2023-07-21 15:16:59,932 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:59,932 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 377d3bd2a68dbf22eb9c5403ea1a5a44, disabling compactions & flushes 2023-07-21 15:16:59,932 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:16:59,932 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:16:59,933 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. after waiting 0 ms 2023-07-21 15:16:59,933 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:16:59,933 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 
2023-07-21 15:16:59,933 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 377d3bd2a68dbf22eb9c5403ea1a5a44: 2023-07-21 15:16:59,935 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:59,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952619936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952619936"}]},"ts":"1689952619936"} 2023-07-21 15:16:59,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952619936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952619936"}]},"ts":"1689952619936"} 2023-07-21 15:16:59,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952619936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952619936"}]},"ts":"1689952619936"} 2023-07-21 15:16:59,937 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952619936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952619936"}]},"ts":"1689952619936"} 2023-07-21 15:16:59,937 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952619936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952619936"}]},"ts":"1689952619936"} 2023-07-21 15:16:59,939 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
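The CREATE_TABLE_ADD_TO_META step above writes one regioninfo/state Put per region and reports "Added 5 regions to meta." A test or operator can confirm the same five regions through the public Admin API; the sketch below does so with Admin.getRegions(TableName) and is illustrative only.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class ListTableRegionsSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      List<RegionInfo> regions =
          admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
      // After CREATE_TABLE_ADD_TO_META this should list the same five encoded region
      // names that appear in the meta Puts above.
      for (RegionInfo region : regions) {
        System.out.println(region.getEncodedName() + " " + region.getRegionNameAsString());
      }
    }
  }
}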
2023-07-21 15:16:59,940 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:59,940 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952619940"}]},"ts":"1689952619940"} 2023-07-21 15:16:59,942 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 15:16:59,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:59,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:59,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:59,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:59,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, ASSIGN}] 2023-07-21 15:16:59,947 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, ASSIGN 2023-07-21 15:16:59,947 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, ASSIGN 2023-07-21 15:16:59,947 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, ASSIGN 2023-07-21 15:16:59,947 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, ASSIGN 2023-07-21 15:16:59,948 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:59,948 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:59,948 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:59,948 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43323,1689952592244; forceNewPlan=false, retain=false 2023-07-21 15:16:59,948 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, ASSIGN 2023-07-21 15:16:59,949 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46091,1689952592464; forceNewPlan=false, retain=false 2023-07-21 15:16:59,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-21 15:17:00,098 INFO [jenkins-hbase17:33893] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 15:17:00,101 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=377d3bd2a68dbf22eb9c5403ea1a5a44, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,101 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=2f87b0818400df6a6e6d9fdffdefb602, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,101 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=fd64de96c11a3b2462797fc40681d138, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,101 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620101"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620101"}]},"ts":"1689952620101"} 2023-07-21 15:17:00,101 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=ac902d7f9dc574047d02daa1997f7287, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,101 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=de02e33977af87270422585e3e1448a5, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,102 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620101"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620101"}]},"ts":"1689952620101"} 2023-07-21 15:17:00,102 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620101"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620101"}]},"ts":"1689952620101"} 2023-07-21 15:17:00,101 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620101"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620101"}]},"ts":"1689952620101"} 2023-07-21 15:17:00,101 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620101"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620101"}]},"ts":"1689952620101"} 2023-07-21 15:17:00,103 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE; OpenRegionProcedure 377d3bd2a68dbf22eb9c5403ea1a5a44, 
server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure ac902d7f9dc574047d02daa1997f7287, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:17:00,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; OpenRegionProcedure de02e33977af87270422585e3e1448a5, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure fd64de96c11a3b2462797fc40681d138, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:17:00,107 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=137, state=RUNNABLE; OpenRegionProcedure 2f87b0818400df6a6e6d9fdffdefb602, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-21 15:17:00,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f87b0818400df6a6e6d9fdffdefb602, NAME => 'Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 15:17:00,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:00,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 
2023-07-21 15:17:00,260 INFO [StoreOpener-2f87b0818400df6a6e6d9fdffdefb602-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd64de96c11a3b2462797fc40681d138, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 15:17:00,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:00,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,261 DEBUG [StoreOpener-2f87b0818400df6a6e6d9fdffdefb602-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/f 2023-07-21 15:17:00,261 DEBUG [StoreOpener-2f87b0818400df6a6e6d9fdffdefb602-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/f 2023-07-21 15:17:00,262 INFO [StoreOpener-2f87b0818400df6a6e6d9fdffdefb602-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f87b0818400df6a6e6d9fdffdefb602 columnFamilyName f 2023-07-21 15:17:00,262 INFO [StoreOpener-fd64de96c11a3b2462797fc40681d138-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,262 INFO [StoreOpener-2f87b0818400df6a6e6d9fdffdefb602-1] regionserver.HStore(310): Store=2f87b0818400df6a6e6d9fdffdefb602/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:00,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,263 DEBUG [StoreOpener-fd64de96c11a3b2462797fc40681d138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/f 2023-07-21 15:17:00,263 DEBUG [StoreOpener-fd64de96c11a3b2462797fc40681d138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/f 2023-07-21 15:17:00,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,264 INFO [StoreOpener-fd64de96c11a3b2462797fc40681d138-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd64de96c11a3b2462797fc40681d138 columnFamilyName f 2023-07-21 15:17:00,264 INFO [StoreOpener-fd64de96c11a3b2462797fc40681d138-1] regionserver.HStore(310): Store=fd64de96c11a3b2462797fc40681d138/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:00,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:00,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2f87b0818400df6a6e6d9fdffdefb602; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10774241280, jitterRate=0.003429412841796875}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:00,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2f87b0818400df6a6e6d9fdffdefb602: 2023-07-21 15:17:00,269 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602., pid=145, masterSystemTime=1689952620254 2023-07-21 15:17:00,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:00,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 
2023-07-21 15:17:00,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de02e33977af87270422585e3e1448a5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 15:17:00,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened fd64de96c11a3b2462797fc40681d138; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10016547840, jitterRate=-0.06713628768920898}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:00,271 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=2f87b0818400df6a6e6d9fdffdefb602, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for fd64de96c11a3b2462797fc40681d138: 2023-07-21 15:17:00,271 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620270"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952620270"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952620270"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952620270"}]},"ts":"1689952620270"} 2023-07-21 15:17:00,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:00,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,271 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138., pid=144, masterSystemTime=1689952620256 2023-07-21 15:17:00,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 
2023-07-21 15:17:00,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac902d7f9dc574047d02daa1997f7287, NAME => 'Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 15:17:00,273 INFO [StoreOpener-de02e33977af87270422585e3e1448a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:00,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,274 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=fd64de96c11a3b2462797fc40681d138, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,274 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620273"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952620273"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952620273"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952620273"}]},"ts":"1689952620273"} 2023-07-21 15:17:00,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=137 2023-07-21 15:17:00,274 INFO [StoreOpener-ac902d7f9dc574047d02daa1997f7287-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=137, state=SUCCESS; OpenRegionProcedure 2f87b0818400df6a6e6d9fdffdefb602, server=jenkins-hbase17.apache.org,43323,1689952592244 in 165 msec 2023-07-21 15:17:00,275 DEBUG [StoreOpener-de02e33977af87270422585e3e1448a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/f 2023-07-21 15:17:00,275 DEBUG [StoreOpener-de02e33977af87270422585e3e1448a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/f 2023-07-21 15:17:00,275 INFO [StoreOpener-de02e33977af87270422585e3e1448a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de02e33977af87270422585e3e1448a5 columnFamilyName f 2023-07-21 15:17:00,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, ASSIGN in 330 msec 2023-07-21 15:17:00,276 INFO [StoreOpener-de02e33977af87270422585e3e1448a5-1] regionserver.HStore(310): Store=de02e33977af87270422585e3e1448a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:00,276 DEBUG [StoreOpener-ac902d7f9dc574047d02daa1997f7287-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/f 2023-07-21 15:17:00,276 DEBUG [StoreOpener-ac902d7f9dc574047d02daa1997f7287-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/f 2023-07-21 15:17:00,277 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-21 15:17:00,277 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure fd64de96c11a3b2462797fc40681d138, server=jenkins-hbase17.apache.org,46091,1689952592464 in 171 msec 2023-07-21 15:17:00,277 INFO [StoreOpener-ac902d7f9dc574047d02daa1997f7287-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac902d7f9dc574047d02daa1997f7287 columnFamilyName f 2023-07-21 15:17:00,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, ASSIGN in 333 msec 2023-07-21 15:17:00,277 INFO [StoreOpener-ac902d7f9dc574047d02daa1997f7287-1] regionserver.HStore(310): Store=ac902d7f9dc574047d02daa1997f7287/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:00,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:00,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened de02e33977af87270422585e3e1448a5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10254957440, jitterRate=-0.044932663440704346}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:00,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for de02e33977af87270422585e3e1448a5: 2023-07-21 15:17:00,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:00,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5., pid=143, masterSystemTime=1689952620254 2023-07-21 15:17:00,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1072): Opened ac902d7f9dc574047d02daa1997f7287; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11756988480, jitterRate=0.09495487809181213}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:00,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ac902d7f9dc574047d02daa1997f7287: 2023-07-21 15:17:00,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287., pid=142, masterSystemTime=1689952620256 2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:17:00,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:17:00,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 377d3bd2a68dbf22eb9c5403ea1a5a44, NAME => 'Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 15:17:00,284 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=de02e33977af87270422585e3e1448a5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,284 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620284"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952620284"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952620284"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952620284"}]},"ts":"1689952620284"} 2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 
2023-07-21 15:17:00,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:17:00,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,285 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=ac902d7f9dc574047d02daa1997f7287, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,285 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620285"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952620285"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952620285"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952620285"}]},"ts":"1689952620285"} 2023-07-21 15:17:00,286 INFO [StoreOpener-377d3bd2a68dbf22eb9c5403ea1a5a44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,287 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-21 15:17:00,288 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; OpenRegionProcedure de02e33977af87270422585e3e1448a5, server=jenkins-hbase17.apache.org,43323,1689952592244 in 181 msec 2023-07-21 15:17:00,288 DEBUG [StoreOpener-377d3bd2a68dbf22eb9c5403ea1a5a44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/f 2023-07-21 15:17:00,288 DEBUG [StoreOpener-377d3bd2a68dbf22eb9c5403ea1a5a44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/f 2023-07-21 15:17:00,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-21 15:17:00,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure ac902d7f9dc574047d02daa1997f7287, server=jenkins-hbase17.apache.org,46091,1689952592464 in 183 msec 2023-07-21 15:17:00,288 INFO [StoreOpener-377d3bd2a68dbf22eb9c5403ea1a5a44-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 377d3bd2a68dbf22eb9c5403ea1a5a44 columnFamilyName f 2023-07-21 15:17:00,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, ASSIGN in 344 msec 2023-07-21 15:17:00,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, ASSIGN in 344 msec 2023-07-21 15:17:00,290 INFO [StoreOpener-377d3bd2a68dbf22eb9c5403ea1a5a44-1] regionserver.HStore(310): Store=377d3bd2a68dbf22eb9c5403ea1a5a44/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:00,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:00,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 377d3bd2a68dbf22eb9c5403ea1a5a44; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10773417440, jitterRate=0.0033526867628097534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:00,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 377d3bd2a68dbf22eb9c5403ea1a5a44: 2023-07-21 15:17:00,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44., pid=141, masterSystemTime=1689952620254 2023-07-21 15:17:00,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:17:00,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 
2023-07-21 15:17:00,298 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=377d3bd2a68dbf22eb9c5403ea1a5a44, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,298 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620298"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952620298"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952620298"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952620298"}]},"ts":"1689952620298"} 2023-07-21 15:17:00,300 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-21 15:17:00,300 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; OpenRegionProcedure 377d3bd2a68dbf22eb9c5403ea1a5a44, server=jenkins-hbase17.apache.org,43323,1689952592244 in 196 msec 2023-07-21 15:17:00,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-21 15:17:00,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, ASSIGN in 356 msec 2023-07-21 15:17:00,302 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:00,302 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952620302"}]},"ts":"1689952620302"} 2023-07-21 15:17:00,303 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 15:17:00,305 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:00,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 448 msec 2023-07-21 15:17:00,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-21 15:17:00,465 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-21 15:17:00,465 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-21 15:17:00,465 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:00,469 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-21 15:17:00,470 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:00,470 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 15:17:00,470 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:00,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 15:17:00,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:17:00,478 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 15:17:00,478 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testDisabledTableMove 2023-07-21 15:17:00,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-21 15:17:00,482 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952620482"}]},"ts":"1689952620482"} 2023-07-21 15:17:00,483 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 15:17:00,484 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 15:17:00,485 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, UNASSIGN}] 2023-07-21 15:17:00,488 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, UNASSIGN 2023-07-21 15:17:00,488 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, UNASSIGN 2023-07-21 15:17:00,488 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, UNASSIGN 2023-07-21 15:17:00,488 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, UNASSIGN 2023-07-21 15:17:00,488 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, UNASSIGN 2023-07-21 15:17:00,489 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=de02e33977af87270422585e3e1448a5, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,489 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=fd64de96c11a3b2462797fc40681d138, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,489 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=377d3bd2a68dbf22eb9c5403ea1a5a44, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,489 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620489"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620489"}]},"ts":"1689952620489"} 2023-07-21 15:17:00,489 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620489"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620489"}]},"ts":"1689952620489"} 2023-07-21 15:17:00,489 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=2f87b0818400df6a6e6d9fdffdefb602, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:00,489 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620489"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620489"}]},"ts":"1689952620489"} 2023-07-21 15:17:00,489 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620489"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620489"}]},"ts":"1689952620489"} 2023-07-21 15:17:00,490 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=ac902d7f9dc574047d02daa1997f7287, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:00,490 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620490"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952620490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952620490"}]},"ts":"1689952620490"} 2023-07-21 15:17:00,490 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=150, state=RUNNABLE; CloseRegionProcedure fd64de96c11a3b2462797fc40681d138, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:17:00,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=151, state=RUNNABLE; CloseRegionProcedure 377d3bd2a68dbf22eb9c5403ea1a5a44, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=149, state=RUNNABLE; CloseRegionProcedure de02e33977af87270422585e3e1448a5, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=148, state=RUNNABLE; CloseRegionProcedure 2f87b0818400df6a6e6d9fdffdefb602, server=jenkins-hbase17.apache.org,43323,1689952592244}] 2023-07-21 15:17:00,493 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=147, state=RUNNABLE; CloseRegionProcedure ac902d7f9dc574047d02daa1997f7287, server=jenkins-hbase17.apache.org,46091,1689952592464}] 2023-07-21 15:17:00,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-21 15:17:00,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ac902d7f9dc574047d02daa1997f7287, disabling compactions & flushes 2023-07-21 15:17:00,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:17:00,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:17:00,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 
after waiting 0 ms 2023-07-21 15:17:00,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:17:00,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2f87b0818400df6a6e6d9fdffdefb602, disabling compactions & flushes 2023-07-21 15:17:00,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. after waiting 0 ms 2023-07-21 15:17:00,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 2023-07-21 15:17:00,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:00,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:00,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287. 2023-07-21 15:17:00,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ac902d7f9dc574047d02daa1997f7287: 2023-07-21 15:17:00,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602. 
2023-07-21 15:17:00,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2f87b0818400df6a6e6d9fdffdefb602: 2023-07-21 15:17:00,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing fd64de96c11a3b2462797fc40681d138, disabling compactions & flushes 2023-07-21 15:17:00,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. after waiting 0 ms 2023-07-21 15:17:00,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,654 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=ac902d7f9dc574047d02daa1997f7287, regionState=CLOSED 2023-07-21 15:17:00,654 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620654"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952620654"}]},"ts":"1689952620654"} 2023-07-21 15:17:00,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 377d3bd2a68dbf22eb9c5403ea1a5a44, disabling compactions & flushes 2023-07-21 15:17:00,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:17:00,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 2023-07-21 15:17:00,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. after waiting 0 ms 2023-07-21 15:17:00,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 
2023-07-21 15:17:00,656 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=2f87b0818400df6a6e6d9fdffdefb602, regionState=CLOSED 2023-07-21 15:17:00,656 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620656"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952620656"}]},"ts":"1689952620656"} 2023-07-21 15:17:00,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=147 2023-07-21 15:17:00,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=147, state=SUCCESS; CloseRegionProcedure ac902d7f9dc574047d02daa1997f7287, server=jenkins-hbase17.apache.org,46091,1689952592464 in 163 msec 2023-07-21 15:17:00,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=148 2023-07-21 15:17:00,661 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac902d7f9dc574047d02daa1997f7287, UNASSIGN in 174 msec 2023-07-21 15:17:00,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=148, state=SUCCESS; CloseRegionProcedure 2f87b0818400df6a6e6d9fdffdefb602, server=jenkins-hbase17.apache.org,43323,1689952592244 in 165 msec 2023-07-21 15:17:00,662 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f87b0818400df6a6e6d9fdffdefb602, UNASSIGN in 176 msec 2023-07-21 15:17:00,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:00,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:00,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138. 2023-07-21 15:17:00,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44. 
2023-07-21 15:17:00,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for fd64de96c11a3b2462797fc40681d138: 2023-07-21 15:17:00,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 377d3bd2a68dbf22eb9c5403ea1a5a44: 2023-07-21 15:17:00,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,667 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=fd64de96c11a3b2462797fc40681d138, regionState=CLOSED 2023-07-21 15:17:00,667 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620667"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952620667"}]},"ts":"1689952620667"} 2023-07-21 15:17:00,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing de02e33977af87270422585e3e1448a5, disabling compactions & flushes 2023-07-21 15:17:00,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:17:00,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 2023-07-21 15:17:00,669 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=377d3bd2a68dbf22eb9c5403ea1a5a44, regionState=CLOSED 2023-07-21 15:17:00,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. after waiting 0 ms 2023-07-21 15:17:00,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 
2023-07-21 15:17:00,669 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952620669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952620669"}]},"ts":"1689952620669"} 2023-07-21 15:17:00,672 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=150 2023-07-21 15:17:00,672 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=150, state=SUCCESS; CloseRegionProcedure fd64de96c11a3b2462797fc40681d138, server=jenkins-hbase17.apache.org,46091,1689952592464 in 180 msec 2023-07-21 15:17:00,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=151 2023-07-21 15:17:00,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=151, state=SUCCESS; CloseRegionProcedure 377d3bd2a68dbf22eb9c5403ea1a5a44, server=jenkins-hbase17.apache.org,43323,1689952592244 in 180 msec 2023-07-21 15:17:00,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:00,673 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd64de96c11a3b2462797fc40681d138, UNASSIGN in 187 msec 2023-07-21 15:17:00,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5. 
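The DEBUG lines "Checking to see if procedure is done pid=146" that bracket this region-close sequence are the master answering the test client's polling on the DISABLE operation it submitted; the "Operation: DISABLE ... procId: 146 completed" line further below is the same table future resolving. A minimal client-side sketch of that interaction, assuming the branch-2.x Admin API; the class name is illustrative and this is not the actual TestRSGroupsAdmin1 code, which may use the blocking disableTable call instead:

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Submits a DisableTableProcedure on the master (pid=146 in this log) and returns a
      // future; waiting on it is what drives the periodic
      // "Checking to see if procedure is done pid=146" calls answered by MasterRpcServices.
      Future<Void> disable = admin.disableTableAsync(table);
      disable.get(60, TimeUnit.SECONDS);
    }
  }
}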
2023-07-21 15:17:00,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for de02e33977af87270422585e3e1448a5: 2023-07-21 15:17:00,673 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=377d3bd2a68dbf22eb9c5403ea1a5a44, UNASSIGN in 187 msec 2023-07-21 15:17:00,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,675 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=de02e33977af87270422585e3e1448a5, regionState=CLOSED 2023-07-21 15:17:00,675 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952620675"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952620675"}]},"ts":"1689952620675"} 2023-07-21 15:17:00,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=149 2023-07-21 15:17:00,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=149, state=SUCCESS; CloseRegionProcedure de02e33977af87270422585e3e1448a5, server=jenkins-hbase17.apache.org,43323,1689952592244 in 185 msec 2023-07-21 15:17:00,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-21 15:17:00,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=de02e33977af87270422585e3e1448a5, UNASSIGN in 192 msec 2023-07-21 15:17:00,680 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952620680"}]},"ts":"1689952620680"} 2023-07-21 15:17:00,681 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 15:17:00,683 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 15:17:00,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 206 msec 2023-07-21 15:17:00,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-21 15:17:00,784 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-21 15:17:00,784 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-21 15:17:00,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:00,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:17:00,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 15:17:00,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1292599758, current retry=0 2023-07-21 15:17:00,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1292599758. 2023-07-21 15:17:00,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:00,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:00,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:00,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 15:17:00,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:17:00,797 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 15:17:00,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testDisabledTableMove 2023-07-21 15:17:00,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:00,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 89 connection: 136.243.18.41:53818 deadline: 1689952680798, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 15:17:00,799 DEBUG [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-21 15:17:00,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testDisabledTableMove 2023-07-21 15:17:00,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,802 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1292599758' 2023-07-21 15:17:00,804 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:00,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:17:00,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-21 15:17:00,813 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,813 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,813 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,813 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,813 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,815 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/recovered.edits] 2023-07-21 15:17:00,816 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/recovered.edits] 2023-07-21 15:17:00,816 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/recovered.edits] 2023-07-21 15:17:00,816 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/recovered.edits] 2023-07-21 15:17:00,816 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/f, FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/recovered.edits] 2023-07-21 15:17:00,822 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44/recovered.edits/4.seqid 2023-07-21 15:17:00,822 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138/recovered.edits/4.seqid 2023-07-21 15:17:00,822 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287/recovered.edits/4.seqid 2023-07-21 15:17:00,822 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5/recovered.edits/4.seqid 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/recovered.edits/4.seqid to hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/archive/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602/recovered.edits/4.seqid 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/fd64de96c11a3b2462797fc40681d138 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/377d3bd2a68dbf22eb9c5403ea1a5a44 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/ac902d7f9dc574047d02daa1997f7287 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/2f87b0818400df6a6e6d9fdffdefb602 2023-07-21 15:17:00,823 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/.tmp/data/default/Group_testDisabledTableMove/de02e33977af87270422585e3e1448a5 2023-07-21 15:17:00,823 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 15:17:00,825 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,827 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 15:17:00,832 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 15:17:00,833 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,833 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-21 15:17:00,833 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952620833"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,833 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952620833"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,834 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689952619857.de02e33977af87270422585e3e1448a5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952620833"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,834 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952620833"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,834 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952620833"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,835 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 15:17:00,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ac902d7f9dc574047d02daa1997f7287, NAME => 'Group_testDisabledTableMove,,1689952619857.ac902d7f9dc574047d02daa1997f7287.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2f87b0818400df6a6e6d9fdffdefb602, NAME => 'Group_testDisabledTableMove,aaaaa,1689952619857.2f87b0818400df6a6e6d9fdffdefb602.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => de02e33977af87270422585e3e1448a5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689952619857.de02e33977af87270422585e3e1448a5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => fd64de96c11a3b2462797fc40681d138, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689952619857.fd64de96c11a3b2462797fc40681d138.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 377d3bd2a68dbf22eb9c5403ea1a5a44, NAME => 
'Group_testDisabledTableMove,zzzzz,1689952619857.377d3bd2a68dbf22eb9c5403ea1a5a44.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 15:17:00,835 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 2023-07-21 15:17:00,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952620835"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:00,837 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 15:17:00,838 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 15:17:00,839 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 39 msec 2023-07-21 15:17:00,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-21 15:17:00,911 INFO [Listener at localhost.localdomain/34137] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-21 15:17:00,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:00,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:00,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:00,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
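The sequence logged between 15:17:00,78x and 15:17:00,911 — move the disabled table into Group_testDisabledTableMove_1292599758, hit TableNotEnabledException on a second disable attempt, then delete — corresponds roughly to the following client-side calls. This is a hedged sketch built from the class names visible in the stack traces above (RSGroupAdminClient, HBaseAdmin); the surrounding class and variable names are illustrative, not the test's actual code:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveAndDeleteDisabledTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    String group = "Group_testDisabledTableMove_1292599758"; // group name taken from the log
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Move the already-DISABLED table into the target group. With no regions online the
      // master logs "Skipping move regions..." and "Moving 0 region(s)...".
      rsGroupAdmin.moveTables(Collections.singleton(table), group);

      // Delete the table the way HBaseTestingUtility does: try a disable first and fall back
      // to a plain delete when TableNotEnabledException (logged above) comes back.
      try {
        admin.disableTable(table);
      } catch (TableNotEnabledException expected) {
        // "Table: Group_testDisabledTableMove already disabled, so just deleting it."
      }
      // DeleteTableProcedure (pid=158): archive the region dirs, delete the meta rows and
      // table state, drop the descriptor.
      admin.deleteTable(table);
    }
  }
}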
2023-07-21 15:17:00,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:00,916 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557] to rsgroup default 2023-07-21 15:17:00,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:00,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:17:00,920 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1292599758, current retry=0 2023-07-21 15:17:00,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,37121,1689952592049, jenkins-hbase17.apache.org,41557,1689952596371] are moved back to Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1292599758 => default 2023-07-21 15:17:00,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:00,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testDisabledTableMove_1292599758 2023-07-21 15:17:00,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:00,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:17:00,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:00,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:00,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
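From here on the log is TestRSGroupsBase teardown: servers are returned to the default group, the temporary groups are removed, and the master's own RPC address is (unsuccessfully) moved into a fresh "master" group. A rough sketch of that restore step under the same API assumptions as the previous sketch; the addresses are the ones printed in the surrounding lines, and the failure branch produces the ConstraintException stack trace that follows:

import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class TearDownGroupsSketch {
  // rsGroupAdmin would be an RSGroupAdminClient built from the test's Connection, as above.
  static void restoreDefaultGroups(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Return the test group's servers to "default", then drop the now-empty group.
    rsGroupAdmin.moveServers(new HashSet<>(Arrays.asList(
        Address.fromString("jenkins-hbase17.apache.org:37121"),
        Address.fromString("jenkins-hbase17.apache.org:41557"))), "default");
    rsGroupAdmin.removeRSGroup("Group_testDisabledTableMove_1292599758");

    // Re-create the "master" group and try to move the master's own RPC address (port 33893)
    // into it. The rsgroup manager only tracks live region servers, so the call is rejected.
    rsGroupAdmin.addRSGroup("master");
    try {
      rsGroupAdmin.moveServers(new HashSet<>(Arrays.asList(
          Address.fromString("jenkins-hbase17.apache.org:33893"))), "master");
    } catch (ConstraintException e) {
      // Expected: "Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist."
    }
  }
}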
2023-07-21 15:17:00,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:00,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:00,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:00,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:00,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:00,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:00,933 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:00,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:00,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:00,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:00,938 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:00,940 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:00,940 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:00,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:17:00,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:00,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953820942, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:17:00,943 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:17:00,944 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:00,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:00,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:00,945 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:00,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:00,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:00,964 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512 (was 510) Potentially hanging thread: hconnection-0x5a0ffa86-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x75c83904-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1886360740_17 at /127.0.0.1:55074 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-348314727_17 at /127.0.0.1:37998 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=805 (was 776) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=580 (was 621), ProcessCount=184 (was 184), AvailableMemoryMB=3229 (was 3233) 2023-07-21 15:17:00,964 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:17:00,982 INFO [Listener at localhost.localdomain/34137] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=512, OpenFileDescriptor=805, MaxFileDescriptor=60000, SystemLoadAverage=580, ProcessCount=184, AvailableMemoryMB=3228 2023-07-21 15:17:00,982 WARN [Listener at localhost.localdomain/34137] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:17:00,984 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 15:17:00,989 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:00,989 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:00,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:00,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:17:00,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:00,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:00,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:00,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:00,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:00,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:01,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:01,003 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:01,004 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:01,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:01,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:01,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:01,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:01,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:01,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:01,015 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33893] to rsgroup master 2023-07-21 15:17:01,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:01,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:53818 deadline: 1689953821015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 2023-07-21 15:17:01,016 WARN [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:33893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:01,018 INFO [Listener at localhost.localdomain/34137] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:01,019 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:01,019 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:01,019 INFO [Listener at localhost.localdomain/34137] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:37121, jenkins-hbase17.apache.org:41557, jenkins-hbase17.apache.org:43323, jenkins-hbase17.apache.org:46091], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:01,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:01,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:01,021 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 15:17:01,021 INFO [Listener at localhost.localdomain/34137] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:17:01,021 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x76791164 to 127.0.0.1:64886 2023-07-21 15:17:01,022 DEBUG [Listener at localhost.localdomain/34137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,028 DEBUG [Listener at localhost.localdomain/34137] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:17:01,029 DEBUG [Listener at localhost.localdomain/34137] util.JVMClusterUtil(257): Found active master hash=1680768939, stopped=false 2023-07-21 15:17:01,029 DEBUG [Listener at localhost.localdomain/34137] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:17:01,029 DEBUG [Listener at localhost.localdomain/34137] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:17:01,029 INFO [Listener at localhost.localdomain/34137] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): 
regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:01,031 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:01,031 INFO [Listener at localhost.localdomain/34137] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:17:01,031 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:01,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:01,032 DEBUG [Listener at localhost.localdomain/34137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0cc3871e to 127.0.0.1:64886 2023-07-21 15:17:01,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:01,032 DEBUG [Listener at localhost.localdomain/34137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,37121,1689952592049' ***** 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43323,1689952592244' ***** 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:01,035 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,46091,1689952592464' ***** 2023-07-21 15:17:01,035 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:01,035 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:01,036 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2297): ***** 
STOPPING region server 'jenkins-hbase17.apache.org,41557,1689952596371' ***** 2023-07-21 15:17:01,036 INFO [Listener at localhost.localdomain/34137] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:01,036 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:01,037 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:01,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:01,058 INFO [RS:2;jenkins-hbase17:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d533b2e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:01,058 INFO [RS:3;jenkins-hbase17:41557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5bd7650f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:01,058 INFO [RS:1;jenkins-hbase17:43323] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a9bfc3c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:01,058 INFO [RS:0;jenkins-hbase17:37121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@434024b7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:01,062 INFO [RS:0;jenkins-hbase17:37121] server.AbstractConnector(383): Stopped ServerConnector@6140646b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,062 INFO [RS:1;jenkins-hbase17:43323] server.AbstractConnector(383): Stopped ServerConnector@3de2d87a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,062 INFO [RS:2;jenkins-hbase17:46091] server.AbstractConnector(383): Stopped ServerConnector@21464387{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,062 INFO [RS:3;jenkins-hbase17:41557] server.AbstractConnector(383): Stopped ServerConnector@4bb7d136{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,063 INFO [RS:2;jenkins-hbase17:46091] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:01,063 INFO [RS:1;jenkins-hbase17:43323] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:01,063 INFO [RS:0;jenkins-hbase17:37121] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:01,063 INFO [RS:3;jenkins-hbase17:41557] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:01,065 INFO [RS:2;jenkins-hbase17:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5160d086{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:01,065 INFO [RS:1;jenkins-hbase17:43323] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@53519359{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:01,068 INFO [RS:2;jenkins-hbase17:46091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@747ccb69{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:01,067 INFO [RS:3;jenkins-hbase17:41557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29f60ee1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:01,066 INFO [RS:0;jenkins-hbase17:37121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2dbe3bcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:01,070 INFO [RS:3;jenkins-hbase17:41557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@726beee5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:01,069 INFO [RS:1;jenkins-hbase17:43323] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50290df8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:01,071 INFO [RS:0;jenkins-hbase17:37121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d1a76c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:01,073 INFO [RS:2;jenkins-hbase17:46091] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:01,073 INFO [RS:2;jenkins-hbase17:46091] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:01,073 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:01,074 INFO [RS:2;jenkins-hbase17:46091] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 15:17:01,074 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(3305): Received CLOSE for a1be046ee9a2834d581cd55948dca519 2023-07-21 15:17:01,074 INFO [RS:3;jenkins-hbase17:41557] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:01,074 INFO [RS:0;jenkins-hbase17:37121] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:01,074 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:01,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a1be046ee9a2834d581cd55948dca519, disabling compactions & flushes 2023-07-21 15:17:01,074 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:01,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:17:01,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:17:01,074 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(3305): Received CLOSE for 6b64a17fcefcc8a68fcd4f0dcc651985 2023-07-21 15:17:01,075 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(3305): Received CLOSE for 36a5507710a8db368e3f50132ff98e27 2023-07-21 15:17:01,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. after waiting 0 ms 2023-07-21 15:17:01,075 INFO [RS:3;jenkins-hbase17:41557] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:01,075 INFO [RS:0;jenkins-hbase17:37121] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:01,075 INFO [RS:3;jenkins-hbase17:41557] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:01,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:17:01,075 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:01,075 DEBUG [RS:2;jenkins-hbase17:46091] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x70ce3fce to 127.0.0.1:64886 2023-07-21 15:17:01,075 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:17:01,075 INFO [RS:0;jenkins-hbase17:37121] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:01,075 DEBUG [RS:2;jenkins-hbase17:46091] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,075 DEBUG [RS:3;jenkins-hbase17:41557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5562d093 to 127.0.0.1:64886 2023-07-21 15:17:01,075 INFO [RS:2;jenkins-hbase17:46091] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-21 15:17:01,075 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing a1be046ee9a2834d581cd55948dca519 1/1 column families, dataSize=22.37 KB heapSize=36.89 KB 2023-07-21 15:17:01,075 INFO [RS:2;jenkins-hbase17:46091] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:01,076 INFO [RS:1;jenkins-hbase17:43323] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:01,076 INFO [RS:1;jenkins-hbase17:43323] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:01,076 INFO [RS:1;jenkins-hbase17:43323] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:01,076 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(3305): Received CLOSE for 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:17:01,076 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:01,075 DEBUG [RS:3;jenkins-hbase17:41557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,076 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41557,1689952596371; all regions closed. 2023-07-21 15:17:01,075 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:17:01,077 DEBUG [RS:0;jenkins-hbase17:37121] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x312a4151 to 127.0.0.1:64886 2023-07-21 15:17:01,077 DEBUG [RS:0;jenkins-hbase17:37121] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,077 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,37121,1689952592049; all regions closed. 2023-07-21 15:17:01,076 DEBUG [RS:1;jenkins-hbase17:43323] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34ea0ffe to 127.0.0.1:64886 2023-07-21 15:17:01,076 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:01,077 DEBUG [RS:1;jenkins-hbase17:43323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,076 INFO [RS:2;jenkins-hbase17:46091] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:01,078 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:17:01,078 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:17:01,078 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1478): Online Regions={0ad7c7c13a1d346732619829706d4f9e=testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e.} 2023-07-21 15:17:01,079 DEBUG [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1504): Waiting on 0ad7c7c13a1d346732619829706d4f9e 2023-07-21 15:17:01,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0ad7c7c13a1d346732619829706d4f9e, disabling compactions & flushes 2023-07-21 15:17:01,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 
2023-07-21 15:17:01,083 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 15:17:01,083 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:17:01,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:17:01,083 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1478): Online Regions={a1be046ee9a2834d581cd55948dca519=hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519., 6b64a17fcefcc8a68fcd4f0dcc651985=hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985., 1588230740=hbase:meta,,1.1588230740, 36a5507710a8db368e3f50132ff98e27=unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27.} 2023-07-21 15:17:01,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. after waiting 1 ms 2023-07-21 15:17:01,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:17:01,088 DEBUG [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1504): Waiting on 1588230740, 36a5507710a8db368e3f50132ff98e27, 6b64a17fcefcc8a68fcd4f0dcc651985, a1be046ee9a2834d581cd55948dca519 2023-07-21 15:17:01,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:17:01,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:17:01,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:17:01,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:17:01,089 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.83 KB heapSize=122.54 KB 2023-07-21 15:17:01,111 DEBUG [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,111 INFO [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37121%2C1689952592049.meta:.meta(num 1689952594950) 2023-07-21 15:17:01,112 DEBUG [RS:3;jenkins-hbase17:41557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,112 INFO [RS:3;jenkins-hbase17:41557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C41557%2C1689952596371:(num 1689952596772) 2023-07-21 15:17:01,112 DEBUG [RS:3;jenkins-hbase17:41557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,113 INFO [RS:3;jenkins-hbase17:41557] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/testRename/0ad7c7c13a1d346732619829706d4f9e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 15:17:01,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:17:01,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0ad7c7c13a1d346732619829706d4f9e: 2023-07-21 15:17:01,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689952614184.0ad7c7c13a1d346732619829706d4f9e. 2023-07-21 15:17:01,118 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,118 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,119 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,122 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,140 INFO [RS:3;jenkins-hbase17:41557] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:01,140 INFO [RS:3;jenkins-hbase17:41557] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:01,141 INFO [RS:3;jenkins-hbase17:41557] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:01,141 INFO [RS:3;jenkins-hbase17:41557] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:01,141 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:17:01,142 INFO [RS:3;jenkins-hbase17:41557] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41557 2023-07-21 15:17:01,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.37 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/2d1e9e71e1074054b3428acb1e5ddab7 2023-07-21 15:17:01,161 DEBUG [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,161 INFO [RS:0;jenkins-hbase17:37121] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C37121%2C1689952592049:(num 1689952594846) 2023-07-21 15:17:01,161 DEBUG [RS:0;jenkins-hbase17:37121] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,161 INFO [RS:0;jenkins-hbase17:37121] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,161 INFO [RS:0;jenkins-hbase17:37121] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:01,162 INFO [RS:0;jenkins-hbase17:37121] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:01,162 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:01,162 INFO [RS:0;jenkins-hbase17:37121] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:01,163 INFO [RS:0;jenkins-hbase17:37121] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 15:17:01,164 INFO [RS:0;jenkins-hbase17:37121] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:37121 2023-07-21 15:17:01,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d1e9e71e1074054b3428acb1e5ddab7 2023-07-21 15:17:01,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=72.02 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/info/6946fc1e9faf4683881a05fc5dc17ff9 2023-07-21 15:17:01,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/.tmp/m/2d1e9e71e1074054b3428acb1e5ddab7 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/2d1e9e71e1074054b3428acb1e5ddab7 2023-07-21 15:17:01,174 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d1e9e71e1074054b3428acb1e5ddab7 2023-07-21 15:17:01,174 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6946fc1e9faf4683881a05fc5dc17ff9 2023-07-21 15:17:01,174 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/m/2d1e9e71e1074054b3428acb1e5ddab7, entries=22, sequenceid=107, filesize=5.9 K 2023-07-21 15:17:01,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.37 KB/22907, heapSize ~36.88 KB/37760, currentSize=0 B/0 for a1be046ee9a2834d581cd55948dca519 in 100ms, sequenceid=107, compaction requested=true 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): 
regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,179 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,37121,1689952592049 2023-07-21 15:17:01,180 ERROR [Listener at localhost.localdomain/34137-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@7d92f5f9 rejected from java.util.concurrent.ThreadPoolExecutor@79ad6e93[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-21 15:17:01,180 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,180 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41557,1689952596371 2023-07-21 15:17:01,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/rsgroup/a1be046ee9a2834d581cd55948dca519/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-21 15:17:01,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:01,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a1be046ee9a2834d581cd55948dca519: 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952595421.a1be046ee9a2834d581cd55948dca519. 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 6b64a17fcefcc8a68fcd4f0dcc651985, disabling compactions & flushes 2023-07-21 15:17:01,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. after waiting 0 ms 2023-07-21 15:17:01,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:17:01,213 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/rep_barrier/51ea6e12a4234ff6a5ce94f67ecd5a31 2023-07-21 15:17:01,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/namespace/6b64a17fcefcc8a68fcd4f0dcc651985/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-21 15:17:01,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 2023-07-21 15:17:01,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 6b64a17fcefcc8a68fcd4f0dcc651985: 2023-07-21 15:17:01,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952595208.6b64a17fcefcc8a68fcd4f0dcc651985. 
2023-07-21 15:17:01,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 36a5507710a8db368e3f50132ff98e27, disabling compactions & flushes 2023-07-21 15:17:01,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:17:01,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:17:01,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. after waiting 0 ms 2023-07-21 15:17:01,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:17:01,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/default/unmovedTable/36a5507710a8db368e3f50132ff98e27/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 15:17:01,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 2023-07-21 15:17:01,221 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51ea6e12a4234ff6a5ce94f67ecd5a31 2023-07-21 15:17:01,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 36a5507710a8db368e3f50132ff98e27: 2023-07-21 15:17:01,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689952615846.36a5507710a8db368e3f50132ff98e27. 
2023-07-21 15:17:01,235 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/table/20ae08eb473b4359beee78c7a527c68f 2023-07-21 15:17:01,240 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 20ae08eb473b4359beee78c7a527c68f 2023-07-21 15:17:01,241 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/info/6946fc1e9faf4683881a05fc5dc17ff9 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info/6946fc1e9faf4683881a05fc5dc17ff9 2023-07-21 15:17:01,247 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6946fc1e9faf4683881a05fc5dc17ff9 2023-07-21 15:17:01,247 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/info/6946fc1e9faf4683881a05fc5dc17ff9, entries=97, sequenceid=214, filesize=16.0 K 2023-07-21 15:17:01,248 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/rep_barrier/51ea6e12a4234ff6a5ce94f67ecd5a31 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier/51ea6e12a4234ff6a5ce94f67ecd5a31 2023-07-21 15:17:01,253 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51ea6e12a4234ff6a5ce94f67ecd5a31 2023-07-21 15:17:01,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/rep_barrier/51ea6e12a4234ff6a5ce94f67ecd5a31, entries=18, sequenceid=214, filesize=6.9 K 2023-07-21 15:17:01,255 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/.tmp/table/20ae08eb473b4359beee78c7a527c68f as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table/20ae08eb473b4359beee78c7a527c68f 2023-07-21 15:17:01,261 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 20ae08eb473b4359beee78c7a527c68f 2023-07-21 15:17:01,261 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/table/20ae08eb473b4359beee78c7a527c68f, entries=27, sequenceid=214, filesize=7.2 K 2023-07-21 15:17:01,262 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.83 KB/79693, heapSize ~122.49 KB/125432, currentSize=0 B/0 for 1588230740 in 174ms, sequenceid=214, compaction requested=false 2023-07-21 15:17:01,271 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/data/hbase/meta/1588230740/recovered.edits/217.seqid, newMaxSeqId=217, maxSeqId=19 2023-07-21 15:17:01,272 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:01,273 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:01,273 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:17:01,273 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:01,280 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43323,1689952592244; all regions closed. 2023-07-21 15:17:01,281 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41557,1689952596371] 2023-07-21 15:17:01,281 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41557,1689952596371; numProcessing=1 2023-07-21 15:17:01,283 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41557,1689952596371 already deleted, retry=false 2023-07-21 15:17:01,283 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41557,1689952596371 expired; onlineServers=3 2023-07-21 15:17:01,283 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,37121,1689952592049] 2023-07-21 15:17:01,283 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,37121,1689952592049; numProcessing=2 2023-07-21 15:17:01,285 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,37121,1689952592049 already deleted, retry=false 2023-07-21 15:17:01,285 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,37121,1689952592049 expired; onlineServers=2 2023-07-21 15:17:01,287 DEBUG [RS:1;jenkins-hbase17:43323] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,287 INFO [RS:1;jenkins-hbase17:43323] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C43323%2C1689952592244:(num 1689952594846) 2023-07-21 15:17:01,287 DEBUG [RS:1;jenkins-hbase17:43323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,288 INFO [RS:1;jenkins-hbase17:43323] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,288 INFO [RS:1;jenkins-hbase17:43323] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore 
name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:01,288 INFO [RS:1;jenkins-hbase17:43323] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:01,288 INFO [RS:1;jenkins-hbase17:43323] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:01,288 INFO [RS:1;jenkins-hbase17:43323] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:01,288 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:01,288 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,46091,1689952592464; all regions closed. 2023-07-21 15:17:01,289 INFO [RS:1;jenkins-hbase17:43323] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43323 2023-07-21 15:17:01,291 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:01,297 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43323,1689952592244 2023-07-21 15:17:01,291 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,299 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43323,1689952592244] 2023-07-21 15:17:01,299 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43323,1689952592244; numProcessing=3 2023-07-21 15:17:01,302 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,43323,1689952592244 already deleted, retry=false 2023-07-21 15:17:01,302 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43323,1689952592244 expired; onlineServers=1 2023-07-21 15:17:01,303 DEBUG [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,303 INFO [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C46091%2C1689952592464.meta:.meta(num 1689952597607) 2023-07-21 15:17:01,309 DEBUG [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/oldWALs 2023-07-21 15:17:01,310 INFO [RS:2;jenkins-hbase17:46091] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C46091%2C1689952592464:(num 1689952594854) 2023-07-21 15:17:01,310 DEBUG [RS:2;jenkins-hbase17:46091] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,310 INFO [RS:2;jenkins-hbase17:46091] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:01,310 INFO [RS:2;jenkins-hbase17:46091] hbase.ChoreService(369): Chore service for: 
regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:01,310 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:01,311 INFO [RS:2;jenkins-hbase17:46091] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:46091 2023-07-21 15:17:01,399 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,399 INFO [RS:1;jenkins-hbase17:43323] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43323,1689952592244; zookeeper connection closed. 2023-07-21 15:17:01,399 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:43323-0x10188738f0a0002, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,400 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@622befaa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@622befaa 2023-07-21 15:17:01,400 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:01,400 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46091,1689952592464 2023-07-21 15:17:01,401 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,46091,1689952592464] 2023-07-21 15:17:01,401 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,46091,1689952592464; numProcessing=4 2023-07-21 15:17:01,402 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,46091,1689952592464 already deleted, retry=false 2023-07-21 15:17:01,402 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,46091,1689952592464 expired; onlineServers=0 2023-07-21 15:17:01,402 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33893,1689952589806' ***** 2023-07-21 15:17:01,402 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:17:01,403 DEBUG [M:0;jenkins-hbase17:33893] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25137590, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:01,403 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:01,405 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): 
master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:01,405 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:01,405 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:01,406 INFO [M:0;jenkins-hbase17:33893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f230d51{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:17:01,406 INFO [M:0;jenkins-hbase17:33893] server.AbstractConnector(383): Stopped ServerConnector@55f514a2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,406 INFO [M:0;jenkins-hbase17:33893] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:01,407 INFO [M:0;jenkins-hbase17:33893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1772dcd7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:01,407 INFO [M:0;jenkins-hbase17:33893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b4a998c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:01,408 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33893,1689952589806 2023-07-21 15:17:01,408 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33893,1689952589806; all regions closed. 2023-07-21 15:17:01,408 DEBUG [M:0;jenkins-hbase17:33893] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:01,408 INFO [M:0;jenkins-hbase17:33893] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:17:01,409 INFO [M:0;jenkins-hbase17:33893] server.AbstractConnector(383): Stopped ServerConnector@77265d22{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:01,409 DEBUG [M:0;jenkins-hbase17:33893] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:17:01,409 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-21 15:17:01,409 DEBUG [M:0;jenkins-hbase17:33893] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:17:01,409 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952594321] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952594321,5,FailOnTimeoutGroup] 2023-07-21 15:17:01,409 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952594312] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952594312,5,FailOnTimeoutGroup] 2023-07-21 15:17:01,409 INFO [M:0;jenkins-hbase17:33893] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:17:01,409 INFO [M:0;jenkins-hbase17:33893] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 15:17:01,409 INFO [M:0;jenkins-hbase17:33893] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 15:17:01,410 DEBUG [M:0;jenkins-hbase17:33893] master.HMaster(1512): Stopping service threads 2023-07-21 15:17:01,410 INFO [M:0;jenkins-hbase17:33893] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:17:01,410 ERROR [M:0;jenkins-hbase17:33893] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 15:17:01,411 INFO [M:0;jenkins-hbase17:33893] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:17:01,411 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 15:17:01,411 DEBUG [M:0;jenkins-hbase17:33893] zookeeper.ZKUtil(398): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:17:01,411 WARN [M:0;jenkins-hbase17:33893] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:17:01,411 INFO [M:0;jenkins-hbase17:33893] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:17:01,412 INFO [M:0;jenkins-hbase17:33893] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:17:01,412 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:17:01,412 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:01,412 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:17:01,412 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:17:01,412 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:01,412 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=529.89 KB heapSize=634.27 KB 2023-07-21 15:17:01,426 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,426 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:41557-0x10188738f0a000b, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,426 INFO [RS:3;jenkins-hbase17:41557] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41557,1689952596371; zookeeper connection closed. 2023-07-21 15:17:01,427 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d8b023a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d8b023a 2023-07-21 15:17:01,430 INFO [M:0;jenkins-hbase17:33893] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=529.89 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2bfb9e83644846979a2076947d66bb49 2023-07-21 15:17:01,436 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2bfb9e83644846979a2076947d66bb49 as hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2bfb9e83644846979a2076947d66bb49 2023-07-21 15:17:01,442 INFO [M:0;jenkins-hbase17:33893] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2bfb9e83644846979a2076947d66bb49, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-21 15:17:01,443 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegion(2948): Finished flush of dataSize ~529.89 KB/542605, heapSize ~634.25 KB/649472, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1176, compaction requested=false 2023-07-21 15:17:01,445 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:17:01,445 DEBUG [M:0;jenkins-hbase17:33893] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:01,449 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/MasterData/WALs/jenkins-hbase17.apache.org,33893,1689952589806/jenkins-hbase17.apache.org%2C33893%2C1689952589806.1689952593182 not finished, retry = 0 2023-07-21 15:17:01,526 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,527 INFO [RS:0;jenkins-hbase17:37121] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,37121,1689952592049; zookeeper connection closed. 2023-07-21 15:17:01,527 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:37121-0x10188738f0a0001, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,527 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3fd93e25] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3fd93e25 2023-07-21 15:17:01,550 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:01,550 INFO [M:0;jenkins-hbase17:33893] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:17:01,551 INFO [M:0;jenkins-hbase17:33893] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33893 2023-07-21 15:17:01,553 DEBUG [M:0;jenkins-hbase17:33893] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,33893,1689952589806 already deleted, retry=false 2023-07-21 15:17:01,727 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,727 INFO [M:0;jenkins-hbase17:33893] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33893,1689952589806; zookeeper connection closed. 2023-07-21 15:17:01,727 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): master:33893-0x10188738f0a0000, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,752 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:17:01,827 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,827 INFO [RS:2;jenkins-hbase17:46091] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,46091,1689952592464; zookeeper connection closed. 
2023-07-21 15:17:01,827 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): regionserver:46091-0x10188738f0a0003, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:01,828 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a529648] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a529648 2023-07-21 15:17:01,828 INFO [Listener at localhost.localdomain/34137] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 15:17:01,828 WARN [Listener at localhost.localdomain/34137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:01,832 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:01,935 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:01,935 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-710562131-136.243.18.41-1689952586084 (Datanode Uuid c42877a0-fda7-47c4-a5c8-bd72c0cc32f8) service to localhost.localdomain/127.0.0.1:41491 2023-07-21 15:17:01,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data5/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:01,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data6/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:01,939 WARN [Listener at localhost.localdomain/34137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:01,942 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:02,044 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:02,045 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-710562131-136.243.18.41-1689952586084 (Datanode Uuid d09cad6b-d2ee-437e-ab86-6ce6541d1774) service to localhost.localdomain/127.0.0.1:41491 2023-07-21 15:17:02,045 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data3/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 
15:17:02,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data4/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:02,047 WARN [Listener at localhost.localdomain/34137] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:02,051 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:02,157 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:02,157 WARN [BP-710562131-136.243.18.41-1689952586084 heartbeating to localhost.localdomain/127.0.0.1:41491] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-710562131-136.243.18.41-1689952586084 (Datanode Uuid bd7d86aa-58e1-4e61-a33e-7d0cbfdebd94) service to localhost.localdomain/127.0.0.1:41491 2023-07-21 15:17:02,158 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data1/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:02,158 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/cluster_fd0365b2-0694-66bf-0d11-422a312a0d63/dfs/data/data2/current/BP-710562131-136.243.18.41-1689952586084] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:02,197 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 15:17:02,323 INFO [Listener at localhost.localdomain/34137] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 15:17:02,405 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 15:17:02,405 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 15:17:02,405 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.log.dir so I do NOT create it in target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99 2023-07-21 15:17:02,405 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ff73c18-489c-d331-b9be-1115b1915e6b/hadoop.tmp.dir so I do NOT create it in 
target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99 2023-07-21 15:17:02,406 INFO [Listener at localhost.localdomain/34137] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c, deleteOnExit=true 2023-07-21 15:17:02,406 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 15:17:02,406 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/test.cache.data in system properties and HBase conf 2023-07-21 15:17:02,406 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 15:17:02,406 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir in system properties and HBase conf 2023-07-21 15:17:02,407 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 15:17:02,407 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 15:17:02,407 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 15:17:02,407 DEBUG [Listener at localhost.localdomain/34137] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 15:17:02,408 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:17:02,408 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:17:02,408 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:17:02,409 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 15:17:02,410 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/nfs.dump.dir in system properties and HBase conf 2023-07-21 15:17:02,410 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir in system properties and HBase conf 2023-07-21 15:17:02,410 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:17:02,410 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 15:17:02,410 INFO [Listener at localhost.localdomain/34137] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 15:17:02,415 WARN [Listener at localhost.localdomain/34137] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:17:02,415 WARN [Listener at localhost.localdomain/34137] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:17:02,418 DEBUG [Listener at localhost.localdomain/34137-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10188738f0a000a, quorum=127.0.0.1:64886, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 15:17:02,418 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10188738f0a000a, quorum=127.0.0.1:64886, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 15:17:02,520 WARN [Listener at localhost.localdomain/34137] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:02,525 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:02,537 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/Jetty_localhost_localdomain_39317_hdfs____.ic1hv7/webapp 2023-07-21 15:17:02,668 INFO [Listener at localhost.localdomain/34137] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:39317 2023-07-21 15:17:02,673 WARN [Listener at localhost.localdomain/34137] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:17:02,673 WARN [Listener at localhost.localdomain/34137] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:17:02,761 WARN [Listener at localhost.localdomain/42415] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:02,794 WARN [Listener at localhost.localdomain/42415] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:02,796 WARN [Listener at localhost.localdomain/42415] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:02,797 INFO [Listener at localhost.localdomain/42415] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:02,809 INFO [Listener at localhost.localdomain/42415] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/Jetty_localhost_40545_datanode____92aich/webapp 2023-07-21 15:17:02,898 INFO [Listener at localhost.localdomain/42415] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40545 2023-07-21 15:17:02,909 WARN [Listener at localhost.localdomain/35321] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:02,962 WARN [Listener at localhost.localdomain/35321] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:02,964 WARN [Listener at localhost.localdomain/35321] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:02,965 INFO [Listener at localhost.localdomain/35321] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:02,971 INFO [Listener at localhost.localdomain/35321] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/Jetty_localhost_39941_datanode____ttwk6v/webapp 2023-07-21 15:17:03,066 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x738313dbe19061e: Processing first storage report for DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf from datanode 72af5e99-7e8e-4861-8c39-43c771b99009 2023-07-21 15:17:03,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x738313dbe19061e: from storage DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf node DatanodeRegistration(127.0.0.1:46871, datanodeUuid=72af5e99-7e8e-4861-8c39-43c771b99009, infoPort=36507, infoSecurePort=0, ipcPort=35321, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,067 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x738313dbe19061e: Processing first storage report for DS-2a70b8a0-0e71-4fa4-a53c-55507834509a from datanode 72af5e99-7e8e-4861-8c39-43c771b99009 2023-07-21 15:17:03,067 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x738313dbe19061e: from storage DS-2a70b8a0-0e71-4fa4-a53c-55507834509a node 
DatanodeRegistration(127.0.0.1:46871, datanodeUuid=72af5e99-7e8e-4861-8c39-43c771b99009, infoPort=36507, infoSecurePort=0, ipcPort=35321, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,114 INFO [Listener at localhost.localdomain/35321] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39941 2023-07-21 15:17:03,122 WARN [Listener at localhost.localdomain/37053] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:03,154 WARN [Listener at localhost.localdomain/37053] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:03,156 WARN [Listener at localhost.localdomain/37053] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:03,158 INFO [Listener at localhost.localdomain/37053] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:03,164 INFO [Listener at localhost.localdomain/37053] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/Jetty_localhost_33417_datanode____mtv5k1/webapp 2023-07-21 15:17:03,207 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10cb7ccfd02d6582: Processing first storage report for DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a from datanode 4e7b30d2-3dda-43e6-a783-4374c20ea423 2023-07-21 15:17:03,207 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x10cb7ccfd02d6582: from storage DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a node DatanodeRegistration(127.0.0.1:36425, datanodeUuid=4e7b30d2-3dda-43e6-a783-4374c20ea423, infoPort=34371, infoSecurePort=0, ipcPort=37053, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,207 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10cb7ccfd02d6582: Processing first storage report for DS-3d4e06d4-c7a9-456c-b377-11a5fe6b89db from datanode 4e7b30d2-3dda-43e6-a783-4374c20ea423 2023-07-21 15:17:03,207 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x10cb7ccfd02d6582: from storage DS-3d4e06d4-c7a9-456c-b377-11a5fe6b89db node DatanodeRegistration(127.0.0.1:36425, datanodeUuid=4e7b30d2-3dda-43e6-a783-4374c20ea423, infoPort=34371, infoSecurePort=0, ipcPort=37053, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,265 INFO [Listener at localhost.localdomain/37053] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33417 2023-07-21 15:17:03,286 WARN [Listener at localhost.localdomain/37143] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:03,454 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7de0307c1e03d55: Processing first storage report for DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6 from datanode 
2df4f3e5-a45e-403f-ba8d-b25f3a6da5a1 2023-07-21 15:17:03,454 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7de0307c1e03d55: from storage DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6 node DatanodeRegistration(127.0.0.1:37923, datanodeUuid=2df4f3e5-a45e-403f-ba8d-b25f3a6da5a1, infoPort=45487, infoSecurePort=0, ipcPort=37143, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,454 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7de0307c1e03d55: Processing first storage report for DS-82339865-5bcb-4cbf-a0d6-26d3cdff693f from datanode 2df4f3e5-a45e-403f-ba8d-b25f3a6da5a1 2023-07-21 15:17:03,454 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7de0307c1e03d55: from storage DS-82339865-5bcb-4cbf-a0d6-26d3cdff693f node DatanodeRegistration(127.0.0.1:37923, datanodeUuid=2df4f3e5-a45e-403f-ba8d-b25f3a6da5a1, infoPort=45487, infoSecurePort=0, ipcPort=37143, storageInfo=lv=-57;cid=testClusterID;nsid=154258534;c=1689952622418), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:03,553 DEBUG [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99 2023-07-21 15:17:03,557 INFO [Listener at localhost.localdomain/37143] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/zookeeper_0, clientPort=60449, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 15:17:03,558 INFO [Listener at localhost.localdomain/37143] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60449 2023-07-21 15:17:03,559 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,560 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,585 INFO [Listener at localhost.localdomain/37143] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8 with version=8 2023-07-21 15:17:03,585 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/hbase-staging 2023-07-21 15:17:03,586 DEBUG [Listener at localhost.localdomain/37143] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 15:17:03,586 DEBUG [Listener at localhost.localdomain/37143] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 15:17:03,586 DEBUG [Listener at localhost.localdomain/37143] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 15:17:03,586 DEBUG [Listener at localhost.localdomain/37143] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 15:17:03,587 INFO [Listener at localhost.localdomain/37143] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:03,587 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,587 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,587 INFO [Listener at localhost.localdomain/37143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:03,588 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,588 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:03,588 INFO [Listener at localhost.localdomain/37143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:03,590 INFO [Listener at localhost.localdomain/37143] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36713 2023-07-21 15:17:03,591 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,592 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,593 INFO [Listener at localhost.localdomain/37143] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36713 connecting to ZooKeeper ensemble=127.0.0.1:60449 2023-07-21 15:17:03,600 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:367130x0, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:03,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36713-0x101887416e20000 connected 2023-07-21 15:17:03,620 DEBUG [Listener at 
localhost.localdomain/37143] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:03,620 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:03,624 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:03,625 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36713 2023-07-21 15:17:03,632 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36713 2023-07-21 15:17:03,634 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36713 2023-07-21 15:17:03,635 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36713 2023-07-21 15:17:03,636 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36713 2023-07-21 15:17:03,638 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:03,638 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:03,638 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:03,639 INFO [Listener at localhost.localdomain/37143] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 15:17:03,639 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:03,639 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:03,639 INFO [Listener at localhost.localdomain/37143] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:03,639 INFO [Listener at localhost.localdomain/37143] http.HttpServer(1146): Jetty bound to port 33153 2023-07-21 15:17:03,640 INFO [Listener at localhost.localdomain/37143] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:03,649 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,650 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7aed9de9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:03,650 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,650 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c98771b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:03,770 INFO [Listener at localhost.localdomain/37143] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:03,773 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:03,774 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:03,774 INFO [Listener at localhost.localdomain/37143] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:17:03,776 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,777 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3ffcaefd{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/jetty-0_0_0_0-33153-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4654308620147352384/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:17:03,785 INFO [Listener at localhost.localdomain/37143] server.AbstractConnector(333): Started ServerConnector@183493bb{HTTP/1.1, (http/1.1)}{0.0.0.0:33153} 2023-07-21 15:17:03,786 INFO [Listener at localhost.localdomain/37143] server.Server(415): Started @39731ms 2023-07-21 15:17:03,786 INFO [Listener at localhost.localdomain/37143] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8, hbase.cluster.distributed=false 2023-07-21 15:17:03,836 INFO [Listener at localhost.localdomain/37143] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:03,837 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,837 INFO 
[Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,837 INFO [Listener at localhost.localdomain/37143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:03,837 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:03,838 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:03,838 INFO [Listener at localhost.localdomain/37143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:03,848 INFO [Listener at localhost.localdomain/37143] ipc.NettyRpcServer(120): Bind to /136.243.18.41:45835 2023-07-21 15:17:03,850 INFO [Listener at localhost.localdomain/37143] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:03,853 DEBUG [Listener at localhost.localdomain/37143] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:03,854 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,855 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:03,857 INFO [Listener at localhost.localdomain/37143] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45835 connecting to ZooKeeper ensemble=127.0.0.1:60449 2023-07-21 15:17:03,863 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:458350x0, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:03,865 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:458350x0, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:03,866 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:458350x0, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:03,873 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45835-0x101887416e20001 connected 2023-07-21 15:17:03,874 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:03,874 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45835 2023-07-21 15:17:03,875 DEBUG [Listener at localhost.localdomain/37143] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45835 2023-07-21 15:17:03,875 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45835 2023-07-21 15:17:03,875 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45835 2023-07-21 15:17:03,875 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45835 2023-07-21 15:17:03,877 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:03,878 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:03,878 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:03,878 INFO [Listener at localhost.localdomain/37143] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:03,879 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:03,879 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:03,879 INFO [Listener at localhost.localdomain/37143] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
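The region server repeats the master's bootstrap sequence: it joins the ZooKeeper ensemble at 127.0.0.1:60449 and registers watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. A minimal sketch of that "watch a znode that is not there yet" pattern, written against the plain ZooKeeper client API rather than HBase's internal ZKUtil (the watcher body is illustrative):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchMasterZNode {
    public static void main(String[] args) throws Exception {
        // The default watcher fires for session events and for watches set via exists(path, true).
        Watcher watcher = (WatchedEvent event) ->
                System.out.println("event=" + event.getType() + " path=" + event.getPath());
        ZooKeeper zk = new ZooKeeper("127.0.0.1:60449", 90_000, watcher);
        // exists() registers the watch even though the node is absent, so this client
        // is notified once the active master later creates /hbase/master.
        zk.exists("/hbase/master", true);
        Thread.sleep(60_000); // keep the session alive long enough to observe the event
    }
}

The same registration then repeats below for the second and third region servers on ports 34385 and 45255.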
2023-07-21 15:17:03,880 INFO [Listener at localhost.localdomain/37143] http.HttpServer(1146): Jetty bound to port 39897 2023-07-21 15:17:03,880 INFO [Listener at localhost.localdomain/37143] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:03,884 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,885 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d60927b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:03,885 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,885 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4f5a6019{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:03,983 INFO [Listener at localhost.localdomain/37143] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:03,984 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:03,985 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:03,985 INFO [Listener at localhost.localdomain/37143] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:17:03,986 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:03,987 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@19bc92cf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/jetty-0_0_0_0-39897-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3710285661780887954/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:03,989 INFO [Listener at localhost.localdomain/37143] server.AbstractConnector(333): Started ServerConnector@711c3c9d{HTTP/1.1, (http/1.1)}{0.0.0.0:39897} 2023-07-21 15:17:03,989 INFO [Listener at localhost.localdomain/37143] server.Server(415): Started @39935ms 2023-07-21 15:17:04,004 INFO [Listener at localhost.localdomain/37143] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:04,004 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:04,005 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:17:04,005 INFO [Listener at localhost.localdomain/37143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:04,005 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:04,005 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:04,005 INFO [Listener at localhost.localdomain/37143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:04,006 INFO [Listener at localhost.localdomain/37143] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34385 2023-07-21 15:17:04,006 INFO [Listener at localhost.localdomain/37143] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:04,007 DEBUG [Listener at localhost.localdomain/37143] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:04,008 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:04,009 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:04,010 INFO [Listener at localhost.localdomain/37143] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34385 connecting to ZooKeeper ensemble=127.0.0.1:60449 2023-07-21 15:17:04,013 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:343850x0, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:04,015 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:343850x0, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:04,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34385-0x101887416e20002 connected 2023-07-21 15:17:04,016 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:04,016 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:04,017 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34385 2023-07-21 15:17:04,018 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34385 2023-07-21 15:17:04,018 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34385 2023-07-21 15:17:04,018 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34385 2023-07-21 15:17:04,019 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34385 2023-07-21 15:17:04,021 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:04,021 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:04,021 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:04,022 INFO [Listener at localhost.localdomain/37143] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:04,022 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:04,022 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:04,022 INFO [Listener at localhost.localdomain/37143] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:04,023 INFO [Listener at localhost.localdomain/37143] http.HttpServer(1146): Jetty bound to port 45817 2023-07-21 15:17:04,023 INFO [Listener at localhost.localdomain/37143] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:04,035 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,036 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a399fa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:04,036 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,036 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d2902b3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:04,130 INFO [Listener at localhost.localdomain/37143] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:04,131 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:04,131 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:04,131 INFO [Listener at localhost.localdomain/37143] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:17:04,132 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,133 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@316732dc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/jetty-0_0_0_0-45817-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2543646380738861613/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:04,134 INFO [Listener at localhost.localdomain/37143] server.AbstractConnector(333): Started ServerConnector@73c94476{HTTP/1.1, (http/1.1)}{0.0.0.0:45817} 2023-07-21 15:17:04,134 INFO [Listener at localhost.localdomain/37143] server.Server(415): Started @40080ms 2023-07-21 15:17:04,144 INFO [Listener at localhost.localdomain/37143] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:04,144 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:04,144 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:17:04,144 INFO [Listener at localhost.localdomain/37143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:04,144 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:04,144 INFO [Listener at localhost.localdomain/37143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:04,145 INFO [Listener at localhost.localdomain/37143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:04,145 INFO [Listener at localhost.localdomain/37143] ipc.NettyRpcServer(120): Bind to /136.243.18.41:45255 2023-07-21 15:17:04,146 INFO [Listener at localhost.localdomain/37143] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:04,147 DEBUG [Listener at localhost.localdomain/37143] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:04,148 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:04,149 INFO [Listener at localhost.localdomain/37143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:04,150 INFO [Listener at localhost.localdomain/37143] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45255 connecting to ZooKeeper ensemble=127.0.0.1:60449 2023-07-21 15:17:04,153 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:452550x0, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:04,155 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:452550x0, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:04,156 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45255-0x101887416e20003 connected 2023-07-21 15:17:04,156 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:04,157 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ZKUtil(164): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:04,157 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-21 15:17:04,157 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45255 2023-07-21 15:17:04,160 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45255 2023-07-21 15:17:04,163 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-21 15:17:04,163 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-21 15:17:04,165 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:04,166 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:04,166 INFO [Listener at localhost.localdomain/37143] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:04,166 INFO [Listener at localhost.localdomain/37143] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:04,167 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:04,167 INFO [Listener at localhost.localdomain/37143] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:04,167 INFO [Listener at localhost.localdomain/37143] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:04,167 INFO [Listener at localhost.localdomain/37143] http.HttpServer(1146): Jetty bound to port 38041 2023-07-21 15:17:04,168 INFO [Listener at localhost.localdomain/37143] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:04,172 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,172 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23466e8f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:04,173 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,173 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35195808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:04,270 INFO [Listener at localhost.localdomain/37143] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:04,271 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:04,271 INFO [Listener at localhost.localdomain/37143] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:04,271 INFO [Listener at localhost.localdomain/37143] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:17:04,272 INFO [Listener at localhost.localdomain/37143] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:04,273 INFO [Listener at localhost.localdomain/37143] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1b2884d2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/java.io.tmpdir/jetty-0_0_0_0-38041-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8550925707199051246/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:04,274 INFO [Listener at localhost.localdomain/37143] server.AbstractConnector(333): Started ServerConnector@7b29438d{HTTP/1.1, (http/1.1)}{0.0.0.0:38041} 2023-07-21 15:17:04,274 INFO [Listener at localhost.localdomain/37143] server.Server(415): Started @40220ms 2023-07-21 15:17:04,277 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:04,287 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3e053422{HTTP/1.1, (http/1.1)}{0.0.0.0:38369} 2023-07-21 15:17:04,287 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @40233ms 2023-07-21 15:17:04,287 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,288 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:17:04,288 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,289 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:04,289 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:04,289 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:04,290 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:04,290 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,291 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:17:04,292 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,36713,1689952623586 from backup master directory 2023-07-21 15:17:04,292 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:17:04,292 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,292 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:17:04,292 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
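The election handshake recorded here is: the would-be master advertises itself under /hbase/backup-masters, the /hbase/master znode appears (NodeCreated fires on every watcher registered earlier), and the backup entry is deleted once the registration wins. A simplified sketch of that pattern (not HBase's ActiveMasterManager itself; the ZooKeeper handle and server name are assumed to exist):

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MasterElectionSketch {
    static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
        byte[] data = serverName.getBytes(StandardCharsets.UTF_8);
        // Step 1: park under backup-masters so other processes can see this candidate.
        zk.create("/hbase/backup-masters/" + serverName, data,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        try {
            // Step 2: race to create the ephemeral master znode.
            zk.create("/hbase/master", data,
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            // Step 3: the winner removes its backup entry, as in the "Deleting ZNode ...
            // from backup master directory" line above.
            zk.delete("/hbase/backup-masters/" + serverName, -1);
            return true;
        } catch (KeeperException.NodeExistsException e) {
            return false; // another master is already active; stay in the backup role
        }
    }
}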
2023-07-21 15:17:04,292 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,313 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/hbase.id with ID: c1460ba1-5449-4d67-bf87-763acddd4f5f 2023-07-21 15:17:04,324 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:04,326 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x390291c0 to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:04,338 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5081cc6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:04,338 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:04,339 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 15:17:04,340 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:04,342 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store-tmp 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:17:04,360 INFO 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:04,360 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:04,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/WALs/jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,363 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36713%2C1689952623586, suffix=, logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/WALs/jenkins-hbase17.apache.org,36713,1689952623586, archiveDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/oldWALs, maxLogs=10 2023-07-21 15:17:04,376 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK] 2023-07-21 15:17:04,377 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK] 2023-07-21 15:17:04,378 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK] 2023-07-21 15:17:04,380 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/WALs/jenkins-hbase17.apache.org,36713,1689952623586/jenkins-hbase17.apache.org%2C36713%2C1689952623586.1689952624363 2023-07-21 15:17:04,381 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK], DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK], DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK]] 2023-07-21 15:17:04,381 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:04,381 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:04,381 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,381 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,383 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,385 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:17:04,385 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:17:04,386 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,386 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,387 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,389 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:04,392 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-21 15:17:04,393 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11074373600, jitterRate=0.03138141334056854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:04,393 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:04,396 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:17:04,398 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:17:04,398 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:17:04,398 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 15:17:04,399 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 15:17:04,399 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 15:17:04,399 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:17:04,400 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 15:17:04,401 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
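Note on the "5 core workers (bigger of cpus/4 or 16)" line: the parenthetical is the default sizing formula, so a figure of 5 suggests the worker count was set explicitly for this run. A hedged sketch, assuming the standard hbase.master.procedure.threads property is the override in play:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ProcedureWorkerConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.procedure.threads", 5); // core ProcedureExecutor workers, mirroring the log
        int defaultCore = Math.max(Runtime.getRuntime().availableProcessors() / 4, 16);
        System.out.println("default would have been " + defaultCore + " core workers");
    }
}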
2023-07-21 15:17:04,401 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 15:17:04,401 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:17:04,402 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:17:04,406 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,406 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:17:04,406 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:17:04,407 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:17:04,408 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:04,408 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:04,408 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:04,408 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,408 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:04,408 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,36713,1689952623586, sessionid=0x101887416e20000, setting cluster-up flag (Was=false) 2023-07-21 15:17:04,413 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,415 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:17:04,415 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,417 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 15:17:04,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:04,421 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.hbase-snapshot/.tmp 2023-07-21 15:17:04,426 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:17:04,426 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:17:04,427 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:17:04,428 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:04,428 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 15:17:04,428 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 15:17:04,429 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:04,444 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:17:04,444 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
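The RSGroupAdminEndpoint coprocessor loaded here is the feature under test in this run. For context, a sketch of the usual wiring that enables rsgroup support on a 2.x master (standard property names; the test harness's exact setup is assumed, not shown in this log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupMasterConfig {
    public static Configuration withRsGroups() {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint as a master coprocessor...
        conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // ...and pair it with the group-aware balancer that wraps the stochastic balancer
        // whose config dump appears in the surrounding lines.
        conf.set("hbase.master.loadbalancer.class",
                "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
    }
}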
2023-07-21 15:17:04,445 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:17:04,445 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:04,445 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952654449 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:17:04,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:17:04,452 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,452 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:04,452 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 15:17:04,453 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:17:04,453 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:17:04,453 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:17:04,454 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952624454,5,FailOnTimeoutGroup] 2023-07-21 15:17:04,454 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952624454,5,FailOnTimeoutGroup] 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,454 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
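These chores are the master's background housekeeping; the log and HFile cleaners both run on a 600000 ms period here. As an illustration only (standard property names, values chosen to echo the log rather than as recommendations), the knobs behind them look like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfig {
    public static Configuration cleaners() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.cleaner.interval", 600_000);    // period of the LogsCleaner/HFileCleaner chores (ms)
        conf.setLong("hbase.master.logcleaner.ttl", 600_000L);    // minimum age before old WALs are deleted (ms)
        conf.setLong("hbase.master.hfilecleaner.ttl", 300_000L);  // minimum age before archived HFiles are deleted (ms)
        return conf;
    }
}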
2023-07-21 15:17:04,455 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:04,478 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(951): ClusterId : c1460ba1-5449-4d67-bf87-763acddd4f5f 2023-07-21 15:17:04,480 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:04,481 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(951): ClusterId : c1460ba1-5449-4d67-bf87-763acddd4f5f 2023-07-21 15:17:04,485 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:04,485 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(951): ClusterId : c1460ba1-5449-4d67-bf87-763acddd4f5f 2023-07-21 15:17:04,486 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:04,489 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:04,489 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8 2023-07-21 15:17:04,486 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:04,490 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:04,490 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:04,491 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:04,491 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:04,491 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:04,491 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:04,491 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:04,494 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ReadOnlyZKClient(139): Connect 0x61826038 to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:04,495 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:04,497 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ReadOnlyZKClient(139): Connect 0x3bb6c09e to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:04,498 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:04,499 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ReadOnlyZKClient(139): Connect 0x1a5b5406 to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:04,509 DEBUG [RS:1;jenkins-hbase17:34385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f459278, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:04,509 DEBUG [RS:1;jenkins-hbase17:34385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b7a0b48, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:04,513 DEBUG [RS:2;jenkins-hbase17:45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@314eabee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:04,513 DEBUG [RS:0;jenkins-hbase17:45835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10a33f76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-21 15:17:04,514 DEBUG [RS:2;jenkins-hbase17:45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@744c23a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:04,514 DEBUG [RS:0;jenkins-hbase17:45835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55a1b306, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:04,524 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:04,524 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:34385 2023-07-21 15:17:04,524 INFO [RS:1;jenkins-hbase17:34385] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:04,524 INFO [RS:1;jenkins-hbase17:34385] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:04,524 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:04,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:17:04,525 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,36713,1689952623586 with isa=jenkins-hbase17.apache.org/136.243.18.41:34385, startcode=1689952624004 2023-07-21 15:17:04,525 DEBUG [RS:1;jenkins-hbase17:34385] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:04,526 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/info 2023-07-21 15:17:04,527 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:17:04,527 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:45835 2023-07-21 15:17:04,527 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.ShutdownHook(81): Installed shutdown hook thread: 
Shutdownhook:RS:2;jenkins-hbase17:45255 2023-07-21 15:17:04,527 INFO [RS:0;jenkins-hbase17:45835] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:04,527 INFO [RS:0;jenkins-hbase17:45835] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:04,527 INFO [RS:2;jenkins-hbase17:45255] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:04,527 INFO [RS:2;jenkins-hbase17:45255] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:04,527 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:04,527 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:04,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:17:04,528 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,36713,1689952623586 with isa=jenkins-hbase17.apache.org/136.243.18.41:45835, startcode=1689952623836 2023-07-21 15:17:04,528 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,36713,1689952623586 with isa=jenkins-hbase17.apache.org/136.243.18.41:45255, startcode=1689952624144 2023-07-21 15:17:04,529 DEBUG [RS:0;jenkins-hbase17:45835] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:04,529 DEBUG [RS:2;jenkins-hbase17:45255] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:04,529 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:04,530 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:17:04,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,530 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:17:04,532 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/table 2023-07-21 15:17:04,532 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:17:04,532 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57327, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:04,532 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38505, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:04,532 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52931, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:04,534 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36713] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,534 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,534 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:04,535 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:17:04,535 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36713] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,535 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
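The FSTableDescriptors and HRegion entries above spell out the hbase:meta schema: families info, rep_barrier and table, all IN_MEMORY with BLOOMFILTER NONE, plus the MultiRowMutationEndpoint coprocessor. A hedged sketch of how the same attribute names map onto the 2.x descriptor builders for an ordinary table; the table name is hypothetical and only an 'info'-like family is shown:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      static TableDescriptor exampleDescriptor() throws IOException {
        // Mirrors the 'info' family attributes printed above:
        // BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192'.
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_table")) // hypothetical
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.NONE)
                .setInMemory(true)
                .setMaxVersions(3)
                .setBlocksize(8192)
                .build())
            // Same coprocessor class the master attaches to hbase:meta above.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }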
2023-07-21 15:17:04,535 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:17:04,536 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8 2023-07-21 15:17:04,536 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36713] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,536 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42415 2023-07-21 15:17:04,536 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8 2023-07-21 15:17:04,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:04,536 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33153 2023-07-21 15:17:04,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:17:04,536 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740 2023-07-21 15:17:04,536 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42415 2023-07-21 15:17:04,536 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8 2023-07-21 15:17:04,536 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33153 2023-07-21 15:17:04,536 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42415 2023-07-21 15:17:04,536 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33153 2023-07-21 15:17:04,536 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740 2023-07-21 15:17:04,537 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:04,539 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ZKUtil(162): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,539 INFO [RegionServerTracker-0] 
master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34385,1689952624004] 2023-07-21 15:17:04,539 WARN [RS:1;jenkins-hbase17:34385] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:17:04,539 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ZKUtil(162): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,539 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ZKUtil(162): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,539 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,45255,1689952624144] 2023-07-21 15:17:04,539 WARN [RS:0;jenkins-hbase17:45835] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:17:04,539 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,45835,1689952623836] 2023-07-21 15:17:04,540 INFO [RS:0;jenkins-hbase17:45835] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:04,539 WARN [RS:2;jenkins-hbase17:45255] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:17:04,539 INFO [RS:1;jenkins-hbase17:34385] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:04,540 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,540 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,540 INFO [RS:2;jenkins-hbase17:45255] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:04,540 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
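The region servers above instantiate AsyncFSWALProvider for their WALs, and the meta region falls back to memstore-flush-size divided by the number of families because hbase.hregion.percolumnfamilyflush.size.lower.bound is unset. A small sketch of the corresponding configuration keys; the 16 MB lower bound is an illustrative value:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndFlushConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider,
        // the provider each region server instantiates above.
        conf.set("hbase.wal.provider", "asyncfs");
        // Explicit per-column-family flush lower bound; when unset, FlushLargeStoresPolicy
        // computes the fallback reported as "42.7 M" above. 16 MB here is illustrative.
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 16L * 1024 * 1024);
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }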
2023-07-21 15:17:04,540 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,544 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:17:04,546 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ZKUtil(162): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,546 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ZKUtil(162): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,546 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ZKUtil(162): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,546 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ZKUtil(162): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,546 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ZKUtil(162): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,546 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ZKUtil(162): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,546 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ZKUtil(162): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,546 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ZKUtil(162): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,547 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ZKUtil(162): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,547 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:04,547 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:04,548 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11084921920, jitterRate=0.03236380219459534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:17:04,548 INFO [RS:0;jenkins-hbase17:45835] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:04,548 DEBUG [PEWorker-1] regionserver.HRegion(965): Region 
open journal for 1588230740: 2023-07-21 15:17:04,549 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:17:04,549 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:17:04,549 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:17:04,549 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:17:04,549 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:17:04,549 DEBUG [RS:1;jenkins-hbase17:34385] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:04,549 DEBUG [RS:2;jenkins-hbase17:45255] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:04,549 INFO [RS:1;jenkins-hbase17:34385] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:04,550 INFO [RS:2;jenkins-hbase17:45255] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:04,550 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:04,550 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:17:04,550 INFO [RS:0;jenkins-hbase17:45835] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:04,551 INFO [RS:0;jenkins-hbase17:45835] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:04,551 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
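The "Opened 1588230740" entry above prints the effective split policy chain (SteppingSplitPolicy delegating to IncreasingToUpperBoundRegionSplitPolicy and ConstantSizeRegionSplitPolicy with a jittered desiredMaxFileSize) together with the flush policy. A hedged sketch of the two configuration keys that normally drive those values; the 10 GiB size is illustrative, not read from this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SplitPolicyConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // The split policy class reported above for region 1588230740.
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        // Base max store file size before a split is requested; the jittered
        // desiredMaxFileSize printed above is derived from this setting.
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024); // 10 GiB, illustrative
      }
    }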
2023-07-21 15:17:04,551 INFO [RS:1;jenkins-hbase17:34385] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:04,551 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:04,552 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 15:17:04,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:17:04,552 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:04,552 INFO [RS:2;jenkins-hbase17:45255] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:04,557 INFO [RS:2;jenkins-hbase17:45255] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:04,557 INFO [RS:1;jenkins-hbase17:34385] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:04,557 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,557 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,557 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:04,558 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:17:04,560 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:04,560 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
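InitMetaProcedure has now queued the ASSIGN of region 1588230740 as a TransitRegionStateProcedure. In test code, the usual way to block until that assignment completes is the testing utility's wait helper; a minimal sketch, assuming util is the already-started HBaseTestingUtility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForMetaSketch {
      static void waitForMeta(HBaseTestingUtility util) throws Exception {
        // Blocks until hbase:meta (region 1588230740 above) is assigned and online.
        util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
      }
    }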
2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:04,561 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,561 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 15:17:04,562 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,562 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,563 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
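Each region server starts a fixed set of executor pools (RS_OPEN_REGION, RS_OPEN_META, RS_CLOSE_REGION and so on) whose core sizes appear in the entries above. A sketch of raising the open-region pool through configuration; the key name is the one I believe HRegionServer reads for that pool and should be treated as an assumption, as is the value 3:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ExecutorPoolConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key for the RS_OPEN_REGION executor shown with corePoolSize=1 above;
        // a larger value lets a region server open more regions in parallel.
        conf.setInt("hbase.regionserver.executor.openregion.threads", 3);
      }
    }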
2023-07-21 15:17:04,562 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:0;jenkins-hbase17:45835] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,563 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,564 DEBUG [RS:1;jenkins-hbase17:34385] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,564 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,564 DEBUG [RS:2;jenkins-hbase17:45255] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:04,566 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,566 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,566 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,566 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,566 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,567 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,576 INFO [RS:1;jenkins-hbase17:34385] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:04,576 INFO [RS:0;jenkins-hbase17:45835] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:04,576 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34385,1689952624004-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:17:04,576 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45835,1689952623836-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,580 INFO [RS:2;jenkins-hbase17:45255] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:04,580 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45255,1689952624144-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,585 INFO [RS:1;jenkins-hbase17:34385] regionserver.Replication(203): jenkins-hbase17.apache.org,34385,1689952624004 started 2023-07-21 15:17:04,585 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34385,1689952624004, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34385, sessionid=0x101887416e20002 2023-07-21 15:17:04,585 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:04,585 DEBUG [RS:1;jenkins-hbase17:34385] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,585 DEBUG [RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34385,1689952624004' 2023-07-21 15:17:04,585 DEBUG [RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:04,586 DEBUG [RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:04,586 INFO [RS:0;jenkins-hbase17:45835] regionserver.Replication(203): jenkins-hbase17.apache.org,45835,1689952623836 started 2023-07-21 15:17:04,586 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,45835,1689952623836, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:45835, sessionid=0x101887416e20001 2023-07-21 15:17:04,586 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:04,586 DEBUG [RS:0;jenkins-hbase17:45835] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,586 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:04,586 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45835,1689952623836' 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:04,586 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:04,587 DEBUG [RS:1;jenkins-hbase17:34385] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:04,587 DEBUG [RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34385,1689952624004' 2023-07-21 15:17:04,587 DEBUG 
[RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:04,587 DEBUG [RS:1;jenkins-hbase17:34385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45835,1689952623836' 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:04,587 DEBUG [RS:1;jenkins-hbase17:34385] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:04,587 INFO [RS:1;jenkins-hbase17:34385] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:17:04,587 DEBUG [RS:0;jenkins-hbase17:45835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:04,588 DEBUG [RS:0;jenkins-hbase17:45835] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:04,588 INFO [RS:0;jenkins-hbase17:45835] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:17:04,590 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,590 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
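The flush-table-proc and online-snapshot procedure members each region server starts above are the server-side halves of coordinated table flushes and snapshots. A minimal client-side sketch that would exercise them against this cluster; the table and snapshot names are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshotSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("example_table"); // hypothetical table
          // Coordinated through the /hbase/flush-table-proc znodes watched above.
          admin.flush(table);
          // Coordinated through the /hbase/online-snapshot znodes watched above.
          admin.snapshot("example_snapshot", table);
        }
      }
    }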
2023-07-21 15:17:04,590 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ZKUtil(398): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:17:04,590 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ZKUtil(398): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:17:04,590 INFO [RS:1;jenkins-hbase17:34385] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:17:04,590 INFO [RS:0;jenkins-hbase17:45835] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:17:04,590 INFO [RS:2;jenkins-hbase17:45255] regionserver.Replication(203): jenkins-hbase17.apache.org,45255,1689952624144 started 2023-07-21 15:17:04,590 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,45255,1689952624144, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:45255, sessionid=0x101887416e20003 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45255,1689952624144' 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:04,591 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,591 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:04,591 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,591 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45255,1689952624144' 2023-07-21 15:17:04,591 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:04,592 DEBUG [RS:2;jenkins-hbase17:45255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:04,592 DEBUG [RS:2;jenkins-hbase17:45255] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:04,592 INFO [RS:2;jenkins-hbase17:45255] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:17:04,592 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,592 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ZKUtil(398): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:17:04,592 INFO [RS:2;jenkins-hbase17:45255] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:17:04,592 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:04,592 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
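With all three region servers registered and tracked in the rsgroup default group (the ServerEventsListenerThread entries earlier), TestRSGroupsAdmin1 can drive the RSGroup admin endpoint. A hedged sketch of the kind of calls involved, using the hbase-rsgroup client class I believe branch-2.4 ships; the group name and server address are illustrative:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("my_group"); // hypothetical group
          rsGroupAdmin.moveServers(            // hypothetical member
              Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:34385")),
              "my_group");
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("my_group");
          System.out.println(info.getServers());
        }
      }
    }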
2023-07-21 15:17:04,694 INFO [RS:2;jenkins-hbase17:45255] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C45255%2C1689952624144, suffix=, logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45255,1689952624144, archiveDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs, maxLogs=32 2023-07-21 15:17:04,694 INFO [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C45835%2C1689952623836, suffix=, logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836, archiveDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs, maxLogs=32 2023-07-21 15:17:04,694 INFO [RS:1;jenkins-hbase17:34385] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34385%2C1689952624004, suffix=, logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,34385,1689952624004, archiveDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs, maxLogs=32 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:17:04,713 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK] 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:04,713 DEBUG [jenkins-hbase17:36713] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:04,721 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK] 2023-07-21 15:17:04,721 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK] 2023-07-21 15:17:04,724 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,45835,1689952623836, state=OPENING 2023-07-21 15:17:04,725 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 15:17:04,726 DEBUG [Listener at 
localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:04,727 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:17:04,735 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,45835,1689952623836}] 2023-07-21 15:17:04,738 WARN [ReadOnlyZKClient-127.0.0.1:60449@0x390291c0] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:17:04,738 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:04,750 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK] 2023-07-21 15:17:04,750 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK] 2023-07-21 15:17:04,750 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK] 2023-07-21 15:17:04,757 INFO [RS:2;jenkins-hbase17:45255] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45255,1689952624144/jenkins-hbase17.apache.org%2C45255%2C1689952624144.1689952624696 2023-07-21 15:17:04,761 DEBUG [RS:2;jenkins-hbase17:45255] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK], DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK], DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK]] 2023-07-21 15:17:04,774 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK] 2023-07-21 15:17:04,788 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:04,789 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45835] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:54706 deadline: 1689952684788, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,796 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake 
in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK] 2023-07-21 15:17:04,796 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK] 2023-07-21 15:17:04,801 INFO [RS:1;jenkins-hbase17:34385] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,34385,1689952624004/jenkins-hbase17.apache.org%2C34385%2C1689952624004.1689952624698 2023-07-21 15:17:04,808 DEBUG [RS:1;jenkins-hbase17:34385] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK], DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK], DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK]] 2023-07-21 15:17:04,809 INFO [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836/jenkins-hbase17.apache.org%2C45835%2C1689952623836.1689952624699 2023-07-21 15:17:04,812 DEBUG [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK], DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK], DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK]] 2023-07-21 15:17:04,949 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:04,952 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:17:04,954 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:17:04,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:17:04,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:04,962 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C45835%2C1689952623836.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836, archiveDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs, maxLogs=32 2023-07-21 15:17:04,981 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK] 2023-07-21 15:17:04,981 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping 
handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK] 2023-07-21 15:17:04,981 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK] 2023-07-21 15:17:04,983 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836/jenkins-hbase17.apache.org%2C45835%2C1689952623836.meta.1689952624962.meta 2023-07-21 15:17:04,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46871,DS-c87175b0-d3a0-4b32-bf7d-ed20a51022cf,DISK], DatanodeInfoWithStorage[127.0.0.1:36425,DS-9a40b671-3a88-4aa9-b542-ebb07f9a661a,DISK], DatanodeInfoWithStorage[127.0.0.1:37923,DS-683dbc9c-dfec-41fd-b2da-d2ed8487acf6,DISK]] 2023-07-21 15:17:04,983 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:17:04,984 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:17:04,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:17:04,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:17:04,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/info 2023-07-21 15:17:04,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/info 2023-07-21 15:17:04,987 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:17:04,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:17:04,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:04,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:04,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:17:04,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,989 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:17:04,990 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/table 2023-07-21 15:17:04,990 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/table 2023-07-21 15:17:04,991 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:17:04,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:04,992 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740 2023-07-21 15:17:04,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740 2023-07-21 15:17:04,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 15:17:04,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:17:04,997 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9843160160, jitterRate=-0.08328427374362946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:17:04,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:17:04,998 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689952624949 2023-07-21 15:17:05,002 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:17:05,003 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:17:05,003 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,45835,1689952623836, state=OPEN 2023-07-21 15:17:05,004 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:17:05,004 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:17:05,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 15:17:05,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,45835,1689952623836 in 277 msec 2023-07-21 15:17:05,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 15:17:05,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 453 msec 2023-07-21 15:17:05,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 579 msec 2023-07-21 15:17:05,008 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952625008, completionTime=-1 2023-07-21 15:17:05,008 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 15:17:05,008 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 15:17:05,012 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:17:05,012 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952685012 2023-07-21 15:17:05,012 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952745012 2023-07-21 15:17:05,012 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36713,1689952623586-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36713,1689952623586-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36713,1689952623586-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:36713, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 15:17:05,017 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:05,018 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 15:17:05,020 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 15:17:05,020 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:05,021 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:05,022 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,023 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598 empty. 2023-07-21 15:17:05,023 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,023 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 15:17:05,038 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:05,039 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 42e3f820ef102dd4e6ad8805f0ec2598, NAME => 'hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 42e3f820ef102dd4e6ad8805f0ec2598, disabling compactions & flushes 2023-07-21 15:17:05,050 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. after waiting 0 ms 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,050 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,050 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 42e3f820ef102dd4e6ad8805f0ec2598: 2023-07-21 15:17:05,052 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:05,053 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952625053"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952625053"}]},"ts":"1689952625053"} 2023-07-21 15:17:05,056 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:05,058 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:05,059 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625059"}]},"ts":"1689952625059"} 2023-07-21 15:17:05,060 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 15:17:05,062 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:05,062 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:05,062 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:05,062 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:05,063 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:05,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=42e3f820ef102dd4e6ad8805f0ec2598, ASSIGN}] 2023-07-21 15:17:05,065 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=42e3f820ef102dd4e6ad8805f0ec2598, ASSIGN 2023-07-21 15:17:05,066 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=42e3f820ef102dd4e6ad8805f0ec2598, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,45835,1689952623836; forceNewPlan=false, retain=false 2023-07-21 15:17:05,100 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:05,102 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:17:05,104 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:05,105 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:05,106 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,107 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14 empty. 
2023-07-21 15:17:05,107 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,107 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 15:17:05,119 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:05,120 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a475c6c98bb90afa7e566484d6aefd14, NAME => 'hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp 2023-07-21 15:17:05,131 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,131 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a475c6c98bb90afa7e566484d6aefd14, disabling compactions & flushes 2023-07-21 15:17:05,132 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,132 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,132 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. after waiting 0 ms 2023-07-21 15:17:05,132 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,132 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 
2023-07-21 15:17:05,132 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a475c6c98bb90afa7e566484d6aefd14: 2023-07-21 15:17:05,134 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:05,135 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952625135"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952625135"}]},"ts":"1689952625135"} 2023-07-21 15:17:05,136 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:05,137 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:05,137 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625137"}]},"ts":"1689952625137"} 2023-07-21 15:17:05,138 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 15:17:05,140 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:05,140 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:05,140 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:05,140 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:05,140 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:05,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a475c6c98bb90afa7e566484d6aefd14, ASSIGN}] 2023-07-21 15:17:05,141 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a475c6c98bb90afa7e566484d6aefd14, ASSIGN 2023-07-21 15:17:05,142 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a475c6c98bb90afa7e566484d6aefd14, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,45835,1689952623836; forceNewPlan=false, retain=false 2023-07-21 15:17:05,142 INFO [jenkins-hbase17:36713] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 15:17:05,144 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=42e3f820ef102dd4e6ad8805f0ec2598, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,144 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a475c6c98bb90afa7e566484d6aefd14, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,144 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952625144"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952625144"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952625144"}]},"ts":"1689952625144"} 2023-07-21 15:17:05,144 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952625144"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952625144"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952625144"}]},"ts":"1689952625144"} 2023-07-21 15:17:05,145 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure a475c6c98bb90afa7e566484d6aefd14, server=jenkins-hbase17.apache.org,45835,1689952623836}] 2023-07-21 15:17:05,146 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=5, state=RUNNABLE; OpenRegionProcedure 42e3f820ef102dd4e6ad8805f0ec2598, server=jenkins-hbase17.apache.org,45835,1689952623836}] 2023-07-21 15:17:05,301 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,301 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a475c6c98bb90afa7e566484d6aefd14, NAME => 'hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. service=MultiRowMutationService 2023-07-21 15:17:05,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,304 INFO [StoreOpener-a475c6c98bb90afa7e566484d6aefd14-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,306 DEBUG [StoreOpener-a475c6c98bb90afa7e566484d6aefd14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/m 2023-07-21 15:17:05,306 DEBUG [StoreOpener-a475c6c98bb90afa7e566484d6aefd14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/m 2023-07-21 15:17:05,306 INFO [StoreOpener-a475c6c98bb90afa7e566484d6aefd14-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a475c6c98bb90afa7e566484d6aefd14 columnFamilyName m 2023-07-21 15:17:05,307 INFO [StoreOpener-a475c6c98bb90afa7e566484d6aefd14-1] regionserver.HStore(310): Store=a475c6c98bb90afa7e566484d6aefd14/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:05,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,311 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:05,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:05,314 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a475c6c98bb90afa7e566484d6aefd14; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4340ccbc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:05,314 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a475c6c98bb90afa7e566484d6aefd14: 2023-07-21 15:17:05,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14., pid=8, masterSystemTime=1689952625297 2023-07-21 15:17:05,318 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,318 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:05,318 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 
2023-07-21 15:17:05,318 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a475c6c98bb90afa7e566484d6aefd14, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,318 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 42e3f820ef102dd4e6ad8805f0ec2598, NAME => 'hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:05,318 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952625318"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952625318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952625318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952625318"}]},"ts":"1689952625318"} 2023-07-21 15:17:05,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,320 INFO [StoreOpener-42e3f820ef102dd4e6ad8805f0ec2598-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,322 DEBUG [StoreOpener-42e3f820ef102dd4e6ad8805f0ec2598-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/info 2023-07-21 15:17:05,322 DEBUG [StoreOpener-42e3f820ef102dd4e6ad8805f0ec2598-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/info 2023-07-21 15:17:05,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-21 15:17:05,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure a475c6c98bb90afa7e566484d6aefd14, server=jenkins-hbase17.apache.org,45835,1689952623836 in 175 msec 2023-07-21 15:17:05,322 INFO [StoreOpener-42e3f820ef102dd4e6ad8805f0ec2598-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 42e3f820ef102dd4e6ad8805f0ec2598 columnFamilyName info 2023-07-21 15:17:05,323 INFO [StoreOpener-42e3f820ef102dd4e6ad8805f0ec2598-1] regionserver.HStore(310): Store=42e3f820ef102dd4e6ad8805f0ec2598/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:05,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 15:17:05,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a475c6c98bb90afa7e566484d6aefd14, ASSIGN in 182 msec 2023-07-21 15:17:05,324 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:05,324 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625324"}]},"ts":"1689952625324"} 2023-07-21 15:17:05,325 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 15:17:05,327 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:05,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 226 msec 2023-07-21 15:17:05,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,331 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:05,337 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:05,337 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 42e3f820ef102dd4e6ad8805f0ec2598; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9522495520, 
jitterRate=-0.11314849555492401}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:05,337 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 42e3f820ef102dd4e6ad8805f0ec2598: 2023-07-21 15:17:05,338 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598., pid=9, masterSystemTime=1689952625297 2023-07-21 15:17:05,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,339 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:05,339 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=42e3f820ef102dd4e6ad8805f0ec2598, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,340 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952625339"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952625339"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952625339"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952625339"}]},"ts":"1689952625339"} 2023-07-21 15:17:05,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=5 2023-07-21 15:17:05,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=5, state=SUCCESS; OpenRegionProcedure 42e3f820ef102dd4e6ad8805f0ec2598, server=jenkins-hbase17.apache.org,45835,1689952623836 in 195 msec 2023-07-21 15:17:05,344 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 15:17:05,344 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=42e3f820ef102dd4e6ad8805f0ec2598, ASSIGN in 279 msec 2023-07-21 15:17:05,345 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:05,345 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625345"}]},"ts":"1689952625345"} 2023-07-21 15:17:05,346 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 15:17:05,348 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:05,350 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 331 msec 2023-07-21 15:17:05,406 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 15:17:05,406 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 15:17:05,410 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:05,410 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:05,411 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:17:05,412 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,36713,1689952623586] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 15:17:05,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 15:17:05,420 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:05,420 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:05,424 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 15:17:05,431 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:05,433 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 8 msec 2023-07-21 15:17:05,436 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:17:05,442 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:05,444 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-21 15:17:05,449 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, 
quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 15:17:05,450 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 15:17:05,451 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.158sec 2023-07-21 15:17:05,451 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 15:17:05,451 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:05,452 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 15:17:05,452 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 15:17:05,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:05,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:05,455 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 15:17:05,456 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,456 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865 empty. 2023-07-21 15:17:05,457 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,457 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 15:17:05,461 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 15:17:05,461 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
2023-07-21 15:17:05,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:05,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 15:17:05,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 15:17:05,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36713,1689952623586-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 15:17:05,465 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36713,1689952623586-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 15:17:05,465 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 15:17:05,469 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:05,471 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 24723d7c8eab60a8cc35cfe006213865, NAME => 'hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp 2023-07-21 15:17:05,479 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ReadOnlyZKClient(139): Connect 0x0e904348 to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:05,486 DEBUG [Listener at localhost.localdomain/37143] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ac6d6cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:05,496 DEBUG [hconnection-0x6a6e7f61-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 
15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 24723d7c8eab60a8cc35cfe006213865, disabling compactions & flushes 2023-07-21 15:17:05,497 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. after waiting 0 ms 2023-07-21 15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,497 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,497 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 24723d7c8eab60a8cc35cfe006213865: 2023-07-21 15:17:05,499 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54712, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:05,500 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:05,501 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952625501"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952625501"}]},"ts":"1689952625501"} 2023-07-21 15:17:05,502 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:05,502 INFO [Listener at localhost.localdomain/37143] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:05,503 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
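[Editor's note] At this point the master has created hbase:quota with the 'q' and 'u' families that back the quota subsystem. As an illustration of what that table stores, here is a minimal, hedged sketch of setting a simple request-count throttle through the standard Admin quota API; the user name and limit are invented for the example and are not taken from this test:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettings;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class ThrottleQuotaSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Throttle a (hypothetical) user to 100 requests per second; the master
          // persists this as a row in the hbase:quota table created above.
          QuotaSettings throttle = QuotaSettingsFactory.throttleUser(
              "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
          admin.setQuota(throttle);
        }
      }
    }
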
2023-07-21 15:17:05,506 DEBUG [Listener at localhost.localdomain/37143] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 15:17:05,506 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:05,507 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625507"}]},"ts":"1689952625507"} 2023-07-21 15:17:05,508 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 15:17:05,508 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 15:17:05,510 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:17:05,510 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:05,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 15:17:05,511 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:05,511 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:05,511 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:05,511 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:05,511 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:05,511 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=24723d7c8eab60a8cc35cfe006213865, ASSIGN}] 2023-07-21 15:17:05,511 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ReadOnlyZKClient(139): Connect 0x15e51e64 to 127.0.0.1:60449 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:05,513 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=24723d7c8eab60a8cc35cfe006213865, ASSIGN 2023-07-21 15:17:05,516 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=24723d7c8eab60a8cc35cfe006213865, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,45835,1689952623836; forceNewPlan=false, retain=false 2023-07-21 15:17:05,516 DEBUG [Listener at localhost.localdomain/37143] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ce9eece, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:05,517 INFO [Listener at localhost.localdomain/37143] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60449 2023-07-21 15:17:05,522 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:05,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101887416e2000a connected 2023-07-21 15:17:05,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 15:17:05,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 15:17:05,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 15:17:05,537 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:05,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 11 msec 2023-07-21 15:17:05,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 15:17:05,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:05,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 15:17:05,644 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:05,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-21 15:17:05,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 15:17:05,649 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:05,650 DEBUG [PEWorker-1] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:17:05,651 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:05,653 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:05,654 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f empty. 2023-07-21 15:17:05,654 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:05,655 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 15:17:05,666 INFO [jenkins-hbase17:36713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:17:05,667 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=24723d7c8eab60a8cc35cfe006213865, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,668 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952625667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952625667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952625667"}]},"ts":"1689952625667"} 2023-07-21 15:17:05,671 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:05,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 24723d7c8eab60a8cc35cfe006213865, server=jenkins-hbase17.apache.org,45835,1689952623836}] 2023-07-21 15:17:05,679 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => be452e0f304b0fc5be5c59deb2e9fa1f, NAME => 'np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp 2023-07-21 15:17:05,699 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,700 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing be452e0f304b0fc5be5c59deb2e9fa1f, disabling compactions & flushes 2023-07-21 15:17:05,700 
INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:05,700 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:05,700 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. after waiting 0 ms 2023-07-21 15:17:05,700 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:05,700 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:05,700 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for be452e0f304b0fc5be5c59deb2e9fa1f: 2023-07-21 15:17:05,703 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:05,704 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952625704"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952625704"}]},"ts":"1689952625704"} 2023-07-21 15:17:05,706 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:05,706 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:05,707 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625707"}]},"ts":"1689952625707"} 2023-07-21 15:17:05,708 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 15:17:05,710 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:05,710 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:05,710 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:05,710 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:05,710 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:05,710 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, ASSIGN}] 2023-07-21 15:17:05,711 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, ASSIGN 2023-07-21 15:17:05,712 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, 
ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,45255,1689952624144; forceNewPlan=false, retain=false 2023-07-21 15:17:05,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 15:17:05,833 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24723d7c8eab60a8cc35cfe006213865, NAME => 'hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:05,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:05,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,835 INFO [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,836 DEBUG [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/q 2023-07-21 15:17:05,836 DEBUG [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/q 2023-07-21 15:17:05,836 INFO [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24723d7c8eab60a8cc35cfe006213865 columnFamilyName q 2023-07-21 15:17:05,837 INFO 
[StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] regionserver.HStore(310): Store=24723d7c8eab60a8cc35cfe006213865/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:05,837 INFO [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,839 DEBUG [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/u 2023-07-21 15:17:05,839 DEBUG [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/u 2023-07-21 15:17:05,839 INFO [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24723d7c8eab60a8cc35cfe006213865 columnFamilyName u 2023-07-21 15:17:05,839 INFO [StoreOpener-24723d7c8eab60a8cc35cfe006213865-1] regionserver.HStore(310): Store=24723d7c8eab60a8cc35cfe006213865/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:05,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
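[Editor's note] The np1 namespace above was created with hbase.namespace.quota.maxregions => '5' and hbase.namespace.quota.maxtables => '2', and np1:table1 with a single 'fam1' family. A minimal sketch of how a client could issue the same requests against this mini cluster with the standard HBase 2.x API; the property names and values are taken directly from the log, while the connection setup is assumed:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class Np1SetupSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Namespace with region/table quotas, matching the values in the log.
          admin.createNamespace(NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .addConfiguration("hbase.namespace.quota.maxtables", "2")
              .build());
          // np1:table1 with a single column family 'fam1' (one region, no pre-splits).
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build());
        }
      }
    }
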
2023-07-21 15:17:05,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:05,846 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:05,846 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 24723d7c8eab60a8cc35cfe006213865; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11755385600, jitterRate=0.09480559825897217}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 15:17:05,847 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 24723d7c8eab60a8cc35cfe006213865: 2023-07-21 15:17:05,847 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865., pid=16, masterSystemTime=1689952625829 2023-07-21 15:17:05,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,848 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:05,849 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=24723d7c8eab60a8cc35cfe006213865, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:05,849 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952625849"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952625849"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952625849"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952625849"}]},"ts":"1689952625849"} 2023-07-21 15:17:05,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 15:17:05,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 24723d7c8eab60a8cc35cfe006213865, server=jenkins-hbase17.apache.org,45835,1689952623836 in 172 msec 2023-07-21 15:17:05,853 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 15:17:05,853 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=24723d7c8eab60a8cc35cfe006213865, ASSIGN in 340 msec 2023-07-21 15:17:05,853 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:05,854 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952625853"}]},"ts":"1689952625853"} 2023-07-21 15:17:05,855 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 15:17:05,856 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:05,857 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 405 msec 2023-07-21 15:17:05,862 INFO [jenkins-hbase17:36713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:17:05,863 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=be452e0f304b0fc5be5c59deb2e9fa1f, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:05,864 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952625863"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952625863"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952625863"}]},"ts":"1689952625863"} 2023-07-21 15:17:05,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure be452e0f304b0fc5be5c59deb2e9fa1f, server=jenkins-hbase17.apache.org,45255,1689952624144}] 2023-07-21 15:17:05,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 15:17:06,017 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:06,018 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:17:06,020 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34256, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:17:06,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 
2023-07-21 15:17:06,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be452e0f304b0fc5be5c59deb2e9fa1f, NAME => 'np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:06,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:06,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,027 INFO [StoreOpener-be452e0f304b0fc5be5c59deb2e9fa1f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,028 DEBUG [StoreOpener-be452e0f304b0fc5be5c59deb2e9fa1f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/fam1 2023-07-21 15:17:06,028 DEBUG [StoreOpener-be452e0f304b0fc5be5c59deb2e9fa1f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/fam1 2023-07-21 15:17:06,028 INFO [StoreOpener-be452e0f304b0fc5be5c59deb2e9fa1f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be452e0f304b0fc5be5c59deb2e9fa1f columnFamilyName fam1 2023-07-21 15:17:06,029 INFO [StoreOpener-be452e0f304b0fc5be5c59deb2e9fa1f-1] regionserver.HStore(310): Store=be452e0f304b0fc5be5c59deb2e9fa1f/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:06,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:06,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened be452e0f304b0fc5be5c59deb2e9fa1f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11480299200, jitterRate=0.06918618083000183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:06,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for be452e0f304b0fc5be5c59deb2e9fa1f: 2023-07-21 15:17:06,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f., pid=18, masterSystemTime=1689952626017 2023-07-21 15:17:06,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:06,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 
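[Editor's note] The CompactionConfiguration lines logged while each store opens echo the effective compaction settings (min/max files, selection ratio, off-peak ratio, major-compaction period and jitter). As a hedged reference only, these are the hbase-site.xml keys that usually drive those numbers; the key names are the commonly documented ones and should be double-checked against the branch in use:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Minimum / maximum number of files per minor compaction
        // (the log shows minFilesToCompact:3, maxFilesToCompact:10).
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        // File-selection ratio and its off-peak variant (1.2 / 5.0 in the log).
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        // Major compaction period and jitter (604800000 ms = 7 days, +/- 50% in the log).
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
      }
    }
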
2023-07-21 15:17:06,046 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=be452e0f304b0fc5be5c59deb2e9fa1f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:06,046 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952626046"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952626046"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952626046"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952626046"}]},"ts":"1689952626046"} 2023-07-21 15:17:06,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 15:17:06,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure be452e0f304b0fc5be5c59deb2e9fa1f, server=jenkins-hbase17.apache.org,45255,1689952624144 in 183 msec 2023-07-21 15:17:06,058 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-21 15:17:06,058 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, ASSIGN in 339 msec 2023-07-21 15:17:06,059 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:06,059 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952626059"}]},"ts":"1689952626059"} 2023-07-21 15:17:06,060 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 15:17:06,062 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:06,081 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 423 msec 2023-07-21 15:17:06,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 15:17:06,253 INFO [Listener at localhost.localdomain/37143] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-21 15:17:06,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:06,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 15:17:06,258 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:06,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 15:17:06,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 15:17:06,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-21 15:17:06,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 15:17:06,363 INFO [Listener at localhost.localdomain/37143] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-21 15:17:06,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:06,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:06,366 INFO [Listener at localhost.localdomain/37143] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 15:17:06,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable np1:table1 2023-07-21 15:17:06,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 15:17:06,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:17:06,369 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952626369"}]},"ts":"1689952626369"} 2023-07-21 15:17:06,370 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 15:17:06,371 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 15:17:06,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, UNASSIGN}] 2023-07-21 15:17:06,372 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, UNASSIGN 2023-07-21 15:17:06,373 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=be452e0f304b0fc5be5c59deb2e9fa1f, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:06,373 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952626373"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952626373"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952626373"}]},"ts":"1689952626373"} 2023-07-21 15:17:06,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure be452e0f304b0fc5be5c59deb2e9fa1f, server=jenkins-hbase17.apache.org,45255,1689952624144}] 2023-07-21 15:17:06,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:17:06,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing be452e0f304b0fc5be5c59deb2e9fa1f, disabling compactions & flushes 2023-07-21 15:17:06,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:06,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:06,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. after waiting 0 ms 2023-07-21 15:17:06,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 2023-07-21 15:17:06,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:06,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f. 
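[Editor's note] The CreateTableProcedure for np1:table2 above rolls back with QuotaExceededException because the request would push namespace np1 past its 5-region quota (1 existing region from np1:table1 plus the new table's regions). A hedged sketch of how a test might reproduce that rejection by pre-splitting the new table; the split keys, helper name, and assertion style are invented for the example and are not the test's own code:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceRegionQuotaSketch {
      // Assumes an Admin handle to the mini cluster, e.g. TEST_UTIL.getAdmin().
      static void createTooManyRegions(Admin admin) throws IOException {
        // 4 split keys -> 5 regions for np1:table2; with np1:table1's single region
        // that is 6 regions in the namespace, over the maxregions=5 quota.
        byte[][] splitKeys = {
            Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"), Bytes.toBytes("4")
        };
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build(), splitKeys);
          throw new AssertionError("expected the namespace region quota to reject the create");
        } catch (IOException expected) {
          // In this run the failure surfaces as the QuotaExceededException shown in the log
          // ("not allowed to have 6 regions ... permitted is only 5").
        }
      }
    }
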
2023-07-21 15:17:06,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for be452e0f304b0fc5be5c59deb2e9fa1f: 2023-07-21 15:17:06,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,533 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=be452e0f304b0fc5be5c59deb2e9fa1f, regionState=CLOSED 2023-07-21 15:17:06,533 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952626533"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952626533"}]},"ts":"1689952626533"} 2023-07-21 15:17:06,536 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 15:17:06,536 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure be452e0f304b0fc5be5c59deb2e9fa1f, server=jenkins-hbase17.apache.org,45255,1689952624144 in 160 msec 2023-07-21 15:17:06,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 15:17:06,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=be452e0f304b0fc5be5c59deb2e9fa1f, UNASSIGN in 164 msec 2023-07-21 15:17:06,538 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952626538"}]},"ts":"1689952626538"} 2023-07-21 15:17:06,539 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 15:17:06,540 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 15:17:06,542 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 175 msec 2023-07-21 15:17:06,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:17:06,672 INFO [Listener at localhost.localdomain/37143] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 15:17:06,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete np1:table1 2023-07-21 15:17:06,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,676 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 15:17:06,677 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,680 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:06,680 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:17:06,682 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/fam1, FileablePath, hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/recovered.edits] 2023-07-21 15:17:06,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 15:17:06,688 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/recovered.edits/4.seqid to hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/archive/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f/recovered.edits/4.seqid 2023-07-21 15:17:06,688 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/.tmp/data/np1/table1/be452e0f304b0fc5be5c59deb2e9fa1f 2023-07-21 15:17:06,688 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 15:17:06,690 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,692 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-21 15:17:06,694 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-21 15:17:06,696 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,696 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 15:17:06,696 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952626696"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:06,697 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:17:06,698 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => be452e0f304b0fc5be5c59deb2e9fa1f, NAME => 'np1:table1,,1689952625639.be452e0f304b0fc5be5c59deb2e9fa1f.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:17:06,698 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
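[Editor's note] The disable/delete sequence above (DisableTableProcedure pid=20, then DeleteTableProcedure pid=23, followed a few lines below by DeleteNamespaceProcedure pid=24) corresponds to the usual client-side cleanup order. A short sketch of the equivalent Admin calls, assuming the same np1:table1 / np1 names and an Admin handle obtained elsewhere:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class Np1TeardownSketch {
      // Mirrors the DISABLE -> DELETE -> delete-namespace order seen in the log.
      static void dropNp1(Admin admin) throws IOException {
        TableName table1 = TableName.valueOf("np1", "table1");
        if (admin.tableExists(table1)) {
          admin.disableTable(table1);   // DisableTableProcedure
          admin.deleteTable(table1);    // DeleteTableProcedure
        }
        admin.deleteNamespace("np1");   // DeleteNamespaceProcedure
      }
    }
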
2023-07-21 15:17:06,698 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952626698"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:06,699 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 15:17:06,702 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 15:17:06,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-21 15:17:06,728 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 15:17:06,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 15:17:06,786 INFO [Listener at localhost.localdomain/37143] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 15:17:06,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete np1 2023-07-21 15:17:06,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,814 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,822 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,830 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,831 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 15:17:06,832 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:06,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:17:06,833 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,835 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 15:17:06,838 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 42 msec 2023-07-21 15:17:06,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36713] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=24 2023-07-21 15:17:06,934 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 15:17:06,934 INFO [Listener at localhost.localdomain/37143] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:17:06,934 DEBUG [Listener at localhost.localdomain/37143] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0e904348 to 127.0.0.1:60449 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] util.JVMClusterUtil(257): Found active master hash=965215552, stopped=false 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:17:06,935 DEBUG [Listener at localhost.localdomain/37143] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 15:17:06,935 INFO [Listener at localhost.localdomain/37143] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:06,937 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:06,937 INFO [Listener at localhost.localdomain/37143] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:17:06,937 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:06,937 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:06,937 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:06,937 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:06,937 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:06,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:06,939 DEBUG 
[Listener at localhost.localdomain/37143] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x390291c0 to 127.0.0.1:60449 2023-07-21 15:17:06,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:06,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:06,939 DEBUG [Listener at localhost.localdomain/37143] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:06,939 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1064): Closing user regions 2023-07-21 15:17:06,940 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3305): Received CLOSE for 24723d7c8eab60a8cc35cfe006213865 2023-07-21 15:17:06,940 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,45835,1689952623836' ***** 2023-07-21 15:17:06,941 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:06,940 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34385,1689952624004' ***** 2023-07-21 15:17:06,941 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-21 15:17:06,941 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34385,1689952624004' ***** 2023-07-21 15:17:06,942 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:06,942 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,45255,1689952624144' ***** 2023-07-21 15:17:06,942 INFO [Listener at localhost.localdomain/37143] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:06,942 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:06,942 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:06,949 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3305): Received CLOSE for 42e3f820ef102dd4e6ad8805f0ec2598 2023-07-21 15:17:06,949 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3305): Received CLOSE for a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:06,956 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:06,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 24723d7c8eab60a8cc35cfe006213865, disabling compactions & flushes 2023-07-21 15:17:06,973 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:06,973 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:06,973 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:06,973 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 
15:17:06,973 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:06,969 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:06,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:06,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:06,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. after waiting 0 ms 2023-07-21 15:17:06,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:06,977 INFO [RS:1;jenkins-hbase17:34385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@316732dc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:06,978 INFO [RS:1;jenkins-hbase17:34385] server.AbstractConnector(383): Stopped ServerConnector@73c94476{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:06,978 INFO [RS:1;jenkins-hbase17:34385] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:06,980 INFO [RS:2;jenkins-hbase17:45255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1b2884d2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:06,982 INFO [RS:2;jenkins-hbase17:45255] server.AbstractConnector(383): Stopped ServerConnector@7b29438d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:06,982 INFO [RS:2;jenkins-hbase17:45255] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:07,000 INFO [RS:0;jenkins-hbase17:45835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19bc92cf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:07,001 INFO [RS:1;jenkins-hbase17:34385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d2902b3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:07,001 INFO [RS:1;jenkins-hbase17:34385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a399fa9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:07,001 INFO [RS:2;jenkins-hbase17:45255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35195808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:07,001 INFO [RS:2;jenkins-hbase17:45255] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23466e8f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:07,002 INFO [RS:0;jenkins-hbase17:45835] server.AbstractConnector(383): Stopped ServerConnector@711c3c9d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:07,002 INFO [RS:0;jenkins-hbase17:45835] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:07,002 INFO [RS:1;jenkins-hbase17:34385] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:07,002 INFO [RS:0;jenkins-hbase17:45835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4f5a6019{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:07,002 INFO [RS:0;jenkins-hbase17:45835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d60927b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:07,002 INFO [RS:1;jenkins-hbase17:34385] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:07,003 INFO [RS:1;jenkins-hbase17:34385] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:07,003 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:07,003 DEBUG [RS:1;jenkins-hbase17:34385] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x61826038 to 127.0.0.1:60449 2023-07-21 15:17:07,003 DEBUG [RS:1;jenkins-hbase17:34385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,003 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34385,1689952624004; all regions closed. 2023-07-21 15:17:07,003 DEBUG [RS:1;jenkins-hbase17:34385] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 15:17:07,002 INFO [RS:2;jenkins-hbase17:45255] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:07,012 INFO [RS:2;jenkins-hbase17:45255] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:07,012 INFO [RS:2;jenkins-hbase17:45255] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:07,012 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:07,012 DEBUG [RS:2;jenkins-hbase17:45255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3bb6c09e to 127.0.0.1:60449 2023-07-21 15:17:07,012 DEBUG [RS:2;jenkins-hbase17:45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,013 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,45255,1689952624144; all regions closed. 2023-07-21 15:17:07,013 DEBUG [RS:2;jenkins-hbase17:45255] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 15:17:07,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/quota/24723d7c8eab60a8cc35cfe006213865/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:07,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 2023-07-21 15:17:07,033 INFO [RS:0;jenkins-hbase17:45835] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3307): Received CLOSE for the region: 42e3f820ef102dd4e6ad8805f0ec2598, which we are already trying to CLOSE, but not completed yet 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3307): Received CLOSE for the region: a475c6c98bb90afa7e566484d6aefd14, which we are already trying to CLOSE, but not completed yet 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:07,034 DEBUG [RS:0;jenkins-hbase17:45835] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a5b5406 to 127.0.0.1:60449 2023-07-21 15:17:07,034 DEBUG [RS:0;jenkins-hbase17:45835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,034 INFO [RS:0;jenkins-hbase17:45835] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:07,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 24723d7c8eab60a8cc35cfe006213865: 2023-07-21 15:17:07,037 INFO [RS:0;jenkins-hbase17:45835] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:07,037 INFO [RS:0;jenkins-hbase17:45835] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:07,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689952625451.24723d7c8eab60a8cc35cfe006213865. 
2023-07-21 15:17:07,037 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:17:07,038 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-21 15:17:07,038 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 42e3f820ef102dd4e6ad8805f0ec2598=hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598., a475c6c98bb90afa7e566484d6aefd14=hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14.} 2023-07-21 15:17:07,038 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1504): Waiting on 1588230740, 42e3f820ef102dd4e6ad8805f0ec2598, a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:07,043 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:17:07,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 42e3f820ef102dd4e6ad8805f0ec2598, disabling compactions & flushes 2023-07-21 15:17:07,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:07,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:07,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. after waiting 0 ms 2023-07-21 15:17:07,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 
2023-07-21 15:17:07,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 42e3f820ef102dd4e6ad8805f0ec2598 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 15:17:07,045 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:17:07,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:17:07,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:17:07,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:17:07,046 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-21 15:17:07,058 DEBUG [RS:1;jenkins-hbase17:34385] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs 2023-07-21 15:17:07,058 INFO [RS:1;jenkins-hbase17:34385] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34385%2C1689952624004:(num 1689952624698) 2023-07-21 15:17:07,058 DEBUG [RS:1;jenkins-hbase17:34385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,058 INFO [RS:1;jenkins-hbase17:34385] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:07,058 INFO [RS:1;jenkins-hbase17:34385] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:07,058 INFO [RS:1;jenkins-hbase17:34385] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:07,058 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:07,066 DEBUG [RS:2;jenkins-hbase17:45255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs 2023-07-21 15:17:07,070 INFO [RS:2;jenkins-hbase17:45255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C45255%2C1689952624144:(num 1689952624696) 2023-07-21 15:17:07,070 DEBUG [RS:2;jenkins-hbase17:45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,070 INFO [RS:2;jenkins-hbase17:45255] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:07,071 INFO [RS:2;jenkins-hbase17:45255] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:07,071 INFO [RS:2;jenkins-hbase17:45255] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:07,071 INFO [RS:2;jenkins-hbase17:45255] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:07,071 INFO [RS:2;jenkins-hbase17:45255] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 15:17:07,058 INFO [RS:1;jenkins-hbase17:34385] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:07,071 INFO [RS:1;jenkins-hbase17:34385] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:07,071 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:07,075 INFO [RS:2;jenkins-hbase17:45255] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:45255 2023-07-21 15:17:07,076 INFO [RS:1;jenkins-hbase17:34385] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34385 2023-07-21 15:17:07,106 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:07,106 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:07,106 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45255,1689952624144 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:07,107 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34385,1689952624004 2023-07-21 15:17:07,114 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34385,1689952624004] 2023-07-21 15:17:07,114 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34385,1689952624004; numProcessing=1 2023-07-21 15:17:07,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/.tmp/info/8bd5e56094ca4f2193124e70cb3f8719 2023-07-21 15:17:07,143 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/info/debe4d6f59064a47b96dad35f384c3d2 2023-07-21 15:17:07,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8bd5e56094ca4f2193124e70cb3f8719 2023-07-21 15:17:07,154 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/.tmp/info/8bd5e56094ca4f2193124e70cb3f8719 as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/info/8bd5e56094ca4f2193124e70cb3f8719 2023-07-21 15:17:07,157 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for debe4d6f59064a47b96dad35f384c3d2 2023-07-21 15:17:07,161 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8bd5e56094ca4f2193124e70cb3f8719 2023-07-21 15:17:07,161 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/info/8bd5e56094ca4f2193124e70cb3f8719, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 15:17:07,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 42e3f820ef102dd4e6ad8805f0ec2598 in 121ms, sequenceid=8, compaction requested=false 2023-07-21 15:17:07,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 15:17:07,181 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/rep_barrier/ff938a7e2a9540e2aebc7310a5e25460 2023-07-21 15:17:07,181 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/namespace/42e3f820ef102dd4e6ad8805f0ec2598/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 15:17:07,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 42e3f820ef102dd4e6ad8805f0ec2598: 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952625017.42e3f820ef102dd4e6ad8805f0ec2598. 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a475c6c98bb90afa7e566484d6aefd14, disabling compactions & flushes 2023-07-21 15:17:07,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. after waiting 0 ms 2023-07-21 15:17:07,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:07,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing a475c6c98bb90afa7e566484d6aefd14 1/1 column families, dataSize=594 B heapSize=1.05 KB 2023-07-21 15:17:07,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ff938a7e2a9540e2aebc7310a5e25460 2023-07-21 15:17:07,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=594 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/.tmp/m/e0e876cb909145a8b39acd4c4d4c3842 2023-07-21 15:17:07,208 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/table/b31a95566501423688a5e1c1abac916f 2023-07-21 15:17:07,215 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,215 INFO [RS:1;jenkins-hbase17:34385] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34385,1689952624004; zookeeper connection closed. 
2023-07-21 15:17:07,215 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:34385-0x101887416e20002, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,218 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@979fdf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@979fdf 2023-07-21 15:17:07,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/.tmp/m/e0e876cb909145a8b39acd4c4d4c3842 as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/m/e0e876cb909145a8b39acd4c4d4c3842 2023-07-21 15:17:07,221 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b31a95566501423688a5e1c1abac916f 2023-07-21 15:17:07,222 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/info/debe4d6f59064a47b96dad35f384c3d2 as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/info/debe4d6f59064a47b96dad35f384c3d2 2023-07-21 15:17:07,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/m/e0e876cb909145a8b39acd4c4d4c3842, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 15:17:07,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~594 B/594, heapSize ~1.04 KB/1064, currentSize=0 B/0 for a475c6c98bb90afa7e566484d6aefd14 in 45ms, sequenceid=7, compaction requested=false 2023-07-21 15:17:07,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:17:07,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for debe4d6f59064a47b96dad35f384c3d2 2023-07-21 15:17:07,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/info/debe4d6f59064a47b96dad35f384c3d2, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 15:17:07,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/rep_barrier/ff938a7e2a9540e2aebc7310a5e25460 as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/rep_barrier/ff938a7e2a9540e2aebc7310a5e25460 2023-07-21 15:17:07,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/rsgroup/a475c6c98bb90afa7e566484d6aefd14/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 15:17:07,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:07,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:07,239 DEBUG [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1504): Waiting on 1588230740, a475c6c98bb90afa7e566484d6aefd14 2023-07-21 15:17:07,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a475c6c98bb90afa7e566484d6aefd14: 2023-07-21 15:17:07,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952625100.a475c6c98bb90afa7e566484d6aefd14. 2023-07-21 15:17:07,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ff938a7e2a9540e2aebc7310a5e25460 2023-07-21 15:17:07,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/rep_barrier/ff938a7e2a9540e2aebc7310a5e25460, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 15:17:07,244 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/.tmp/table/b31a95566501423688a5e1c1abac916f as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/table/b31a95566501423688a5e1c1abac916f 2023-07-21 15:17:07,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b31a95566501423688a5e1c1abac916f 2023-07-21 15:17:07,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/table/b31a95566501423688a5e1c1abac916f, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 15:17:07,250 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 205ms, sequenceid=31, compaction requested=false 2023-07-21 15:17:07,250 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 15:17:07,260 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 15:17:07,261 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:07,262 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:07,262 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:17:07,262 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:07,316 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34385,1689952624004 already deleted, retry=false 2023-07-21 15:17:07,317 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,34385,1689952624004 expired; onlineServers=2 2023-07-21 15:17:07,317 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,45255,1689952624144] 2023-07-21 15:17:07,317 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,45255,1689952624144; numProcessing=2 2023-07-21 15:17:07,317 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,45255,1689952624144 already deleted, retry=false 2023-07-21 15:17:07,317 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,45255,1689952624144 expired; onlineServers=1 2023-07-21 15:17:07,339 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,339 INFO [RS:2;jenkins-hbase17:45255] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,45255,1689952624144; zookeeper connection closed. 2023-07-21 15:17:07,339 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x101887416e20003, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,341 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@465240ac] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@465240ac 2023-07-21 15:17:07,439 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,45835,1689952623836; all regions closed. 2023-07-21 15:17:07,439 DEBUG [RS:0;jenkins-hbase17:45835] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 15:17:07,445 DEBUG [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs 2023-07-21 15:17:07,445 INFO [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C45835%2C1689952623836.meta:.meta(num 1689952624962) 2023-07-21 15:17:07,448 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/WALs/jenkins-hbase17.apache.org,45835,1689952623836/jenkins-hbase17.apache.org%2C45835%2C1689952623836.1689952624699 not finished, retry = 0 2023-07-21 15:17:07,551 DEBUG [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/oldWALs 2023-07-21 15:17:07,551 INFO [RS:0;jenkins-hbase17:45835] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C45835%2C1689952623836:(num 1689952624699) 2023-07-21 15:17:07,551 DEBUG [RS:0;jenkins-hbase17:45835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,551 INFO [RS:0;jenkins-hbase17:45835] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:07,552 INFO [RS:0;jenkins-hbase17:45835] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:07,552 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:07,553 INFO [RS:0;jenkins-hbase17:45835] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:45835 2023-07-21 15:17:07,556 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:07,556 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45835,1689952623836 2023-07-21 15:17:07,557 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,45835,1689952623836] 2023-07-21 15:17:07,557 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,45835,1689952623836; numProcessing=3 2023-07-21 15:17:07,557 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,45835,1689952623836 already deleted, retry=false 2023-07-21 15:17:07,557 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,45835,1689952623836 expired; onlineServers=0 2023-07-21 15:17:07,557 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36713,1689952623586' ***** 2023-07-21 15:17:07,557 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:17:07,559 DEBUG [M:0;jenkins-hbase17:36713] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3db796a9, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:07,560 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:07,562 INFO [M:0;jenkins-hbase17:36713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3ffcaefd{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:17:07,562 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:07,562 INFO [M:0;jenkins-hbase17:36713] server.AbstractConnector(383): Stopped ServerConnector@183493bb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:07,562 INFO [M:0;jenkins-hbase17:36713] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:07,563 INFO [M:0;jenkins-hbase17:36713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c98771b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:07,563 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:07,563 INFO [M:0;jenkins-hbase17:36713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7aed9de9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:07,562 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:07,563 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36713,1689952623586 2023-07-21 15:17:07,563 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36713,1689952623586; all regions closed. 2023-07-21 15:17:07,563 DEBUG [M:0;jenkins-hbase17:36713] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:07,563 INFO [M:0;jenkins-hbase17:36713] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:17:07,564 INFO [M:0;jenkins-hbase17:36713] server.AbstractConnector(383): Stopped ServerConnector@3e053422{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:07,564 DEBUG [M:0;jenkins-hbase17:36713] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:17:07,564 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-21 15:17:07,564 DEBUG [M:0;jenkins-hbase17:36713] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:17:07,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952624454] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952624454,5,FailOnTimeoutGroup] 2023-07-21 15:17:07,565 INFO [M:0;jenkins-hbase17:36713] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:17:07,564 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952624454] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952624454,5,FailOnTimeoutGroup] 2023-07-21 15:17:07,565 INFO [M:0;jenkins-hbase17:36713] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 15:17:07,566 INFO [M:0;jenkins-hbase17:36713] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:07,566 DEBUG [M:0;jenkins-hbase17:36713] master.HMaster(1512): Stopping service threads 2023-07-21 15:17:07,566 INFO [M:0;jenkins-hbase17:36713] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:17:07,567 ERROR [M:0;jenkins-hbase17:36713] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 15:17:07,567 INFO [M:0;jenkins-hbase17:36713] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:17:07,567 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 15:17:07,567 DEBUG [M:0;jenkins-hbase17:36713] zookeeper.ZKUtil(398): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:17:07,567 WARN [M:0;jenkins-hbase17:36713] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:17:07,567 INFO [M:0;jenkins-hbase17:36713] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:17:07,568 INFO [M:0;jenkins-hbase17:36713] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:17:07,568 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:17:07,568 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:07,568 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:07,568 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:17:07,568 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:17:07,568 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.07 KB heapSize=109.23 KB 2023-07-21 15:17:07,580 INFO [M:0;jenkins-hbase17:36713] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.07 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c2cd47362ae74ea0a368f91b58545a6b 2023-07-21 15:17:07,587 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c2cd47362ae74ea0a368f91b58545a6b as hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c2cd47362ae74ea0a368f91b58545a6b 2023-07-21 15:17:07,595 INFO [M:0;jenkins-hbase17:36713] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42415/user/jenkins/test-data/0975863f-8e34-c7a6-0661-4fc0152bb9f8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c2cd47362ae74ea0a368f91b58545a6b, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 15:17:07,596 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegion(2948): Finished flush of dataSize ~93.07 KB/95302, heapSize ~109.21 KB/111832, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=194, compaction requested=false 2023-07-21 15:17:07,598 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:07,598 DEBUG [M:0;jenkins-hbase17:36713] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:07,603 INFO [M:0;jenkins-hbase17:36713] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:17:07,603 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:07,604 INFO [M:0;jenkins-hbase17:36713] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36713 2023-07-21 15:17:07,605 DEBUG [M:0;jenkins-hbase17:36713] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,36713,1689952623586 already deleted, retry=false 2023-07-21 15:17:07,657 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,657 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): regionserver:45835-0x101887416e20001, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,657 INFO [RS:0;jenkins-hbase17:45835] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,45835,1689952623836; zookeeper connection closed. 
2023-07-21 15:17:07,657 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5d32bd90] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5d32bd90 2023-07-21 15:17:07,658 INFO [Listener at localhost.localdomain/37143] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 15:17:07,708 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,708 INFO [M:0;jenkins-hbase17:36713] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36713,1689952623586; zookeeper connection closed. 2023-07-21 15:17:07,708 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): master:36713-0x101887416e20000, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:07,709 WARN [Listener at localhost.localdomain/37143] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:07,713 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:07,818 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:07,818 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1765771880-136.243.18.41-1689952622418 (Datanode Uuid 2df4f3e5-a45e-403f-ba8d-b25f3a6da5a1) service to localhost.localdomain/127.0.0.1:42415 2023-07-21 15:17:07,818 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data5/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:07,819 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data6/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:07,821 WARN [Listener at localhost.localdomain/37143] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:07,824 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:07,936 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:07,936 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1765771880-136.243.18.41-1689952622418 (Datanode Uuid 4e7b30d2-3dda-43e6-a783-4374c20ea423) service to 
localhost.localdomain/127.0.0.1:42415 2023-07-21 15:17:07,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data3/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:07,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data4/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:07,940 WARN [Listener at localhost.localdomain/37143] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:07,968 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:08,074 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:08,074 WARN [BP-1765771880-136.243.18.41-1689952622418 heartbeating to localhost.localdomain/127.0.0.1:42415] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1765771880-136.243.18.41-1689952622418 (Datanode Uuid 72af5e99-7e8e-4861-8c39-43c771b99009) service to localhost.localdomain/127.0.0.1:42415 2023-07-21 15:17:08,075 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data1/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:08,075 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/cluster_a0897693-0722-2618-edc8-e93fbe0fe91c/dfs/data/data2/current/BP-1765771880-136.243.18.41-1689952622418] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:08,090 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 15:17:08,204 INFO [Listener at localhost.localdomain/37143] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.log.dir so I do NOT create it in target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f87f072a-1357-1deb-b549-7942ffc74e99/hadoop.tmp.dir so I do NOT create it in target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd, deleteOnExit=true 2023-07-21 15:17:08,253 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/test.cache.data in system properties and HBase conf 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir in system properties and HBase conf 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 15:17:08,254 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 15:17:08,254 DEBUG [Listener at localhost.localdomain/37143] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 15:17:08,255 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:17:08,255 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:17:08,255 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 15:17:08,255 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:17:08,255 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/nfs.dump.dir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir in system properties and HBase conf 2023-07-21 15:17:08,256 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:17:08,257 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 15:17:08,257 INFO [Listener at localhost.localdomain/37143] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 15:17:08,260 WARN [Listener at localhost.localdomain/37143] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:17:08,260 WARN [Listener at localhost.localdomain/37143] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:17:08,301 DEBUG [Listener at localhost.localdomain/37143-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101887416e2000a, quorum=127.0.0.1:60449, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 15:17:08,301 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101887416e2000a, quorum=127.0.0.1:60449, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 15:17:08,303 WARN [Listener at localhost.localdomain/37143] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:08,308 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:08,332 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/Jetty_localhost_localdomain_34471_hdfs____u3efaf/webapp 2023-07-21 15:17:08,466 INFO [Listener at localhost.localdomain/37143] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34471 2023-07-21 15:17:08,471 WARN [Listener at localhost.localdomain/37143] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:17:08,472 WARN [Listener at localhost.localdomain/37143] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:17:08,552 WARN [Listener at localhost.localdomain/35877] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:08,573 WARN [Listener at localhost.localdomain/35877] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:08,576 WARN [Listener at localhost.localdomain/35877] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:08,578 INFO [Listener at localhost.localdomain/35877] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:08,584 INFO [Listener at localhost.localdomain/35877] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/Jetty_localhost_38179_datanode____.bzq2tn/webapp 2023-07-21 15:17:08,659 INFO [Listener at localhost.localdomain/35877] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38179 2023-07-21 15:17:08,668 WARN [Listener at localhost.localdomain/43503] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:08,682 WARN [Listener at localhost.localdomain/43503] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:08,684 WARN [Listener at localhost.localdomain/43503] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:08,685 INFO [Listener at localhost.localdomain/43503] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:08,689 INFO [Listener at localhost.localdomain/43503] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/Jetty_localhost_35469_datanode____pgpi2k/webapp 2023-07-21 15:17:08,746 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x789275bf0b8f5a9c: Processing first storage report for DS-b329c039-e3c5-445c-abcd-5566f4a4de1f from datanode 27bc0075-28f6-490f-aa39-8668e23ce88c 2023-07-21 15:17:08,746 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x789275bf0b8f5a9c: from storage DS-b329c039-e3c5-445c-abcd-5566f4a4de1f node DatanodeRegistration(127.0.0.1:37087, datanodeUuid=27bc0075-28f6-490f-aa39-8668e23ce88c, infoPort=35103, infoSecurePort=0, ipcPort=43503, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:08,746 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x789275bf0b8f5a9c: Processing first storage report for DS-5d86df98-215d-469d-a24d-a041f0a26fce from datanode 27bc0075-28f6-490f-aa39-8668e23ce88c 2023-07-21 15:17:08,746 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x789275bf0b8f5a9c: from storage DS-5d86df98-215d-469d-a24d-a041f0a26fce node 
DatanodeRegistration(127.0.0.1:37087, datanodeUuid=27bc0075-28f6-490f-aa39-8668e23ce88c, infoPort=35103, infoSecurePort=0, ipcPort=43503, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:08,778 INFO [Listener at localhost.localdomain/43503] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35469 2023-07-21 15:17:08,784 WARN [Listener at localhost.localdomain/33679] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:08,799 WARN [Listener at localhost.localdomain/33679] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:17:08,801 WARN [Listener at localhost.localdomain/33679] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:17:08,802 INFO [Listener at localhost.localdomain/33679] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:17:08,806 INFO [Listener at localhost.localdomain/33679] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/Jetty_localhost_39343_datanode____clez77/webapp 2023-07-21 15:17:08,846 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1be8537f2d63dd5f: Processing first storage report for DS-250ecc10-3c83-475b-9bed-82e20a5e50cd from datanode 097a463f-6583-4c42-8916-6d4340735af1 2023-07-21 15:17:08,846 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1be8537f2d63dd5f: from storage DS-250ecc10-3c83-475b-9bed-82e20a5e50cd node DatanodeRegistration(127.0.0.1:42217, datanodeUuid=097a463f-6583-4c42-8916-6d4340735af1, infoPort=33963, infoSecurePort=0, ipcPort=33679, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:08,846 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1be8537f2d63dd5f: Processing first storage report for DS-860784a6-3a7b-402d-b5f2-bb733b820b9c from datanode 097a463f-6583-4c42-8916-6d4340735af1 2023-07-21 15:17:08,846 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1be8537f2d63dd5f: from storage DS-860784a6-3a7b-402d-b5f2-bb733b820b9c node DatanodeRegistration(127.0.0.1:42217, datanodeUuid=097a463f-6583-4c42-8916-6d4340735af1, infoPort=33963, infoSecurePort=0, ipcPort=33679, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:08,902 INFO [Listener at localhost.localdomain/33679] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39343 2023-07-21 15:17:08,914 WARN [Listener at localhost.localdomain/36325] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:17:08,991 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x848095e78acaa3e9: Processing first storage report for DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97 from datanode 
cb321988-c571-4203-92b4-cb68a2e2888c 2023-07-21 15:17:08,991 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x848095e78acaa3e9: from storage DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97 node DatanodeRegistration(127.0.0.1:37369, datanodeUuid=cb321988-c571-4203-92b4-cb68a2e2888c, infoPort=38035, infoSecurePort=0, ipcPort=36325, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:08,991 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x848095e78acaa3e9: Processing first storage report for DS-dc923d76-77d1-459e-885b-b220cc05ca65 from datanode cb321988-c571-4203-92b4-cb68a2e2888c 2023-07-21 15:17:08,991 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x848095e78acaa3e9: from storage DS-dc923d76-77d1-459e-885b-b220cc05ca65 node DatanodeRegistration(127.0.0.1:37369, datanodeUuid=cb321988-c571-4203-92b4-cb68a2e2888c, infoPort=38035, infoSecurePort=0, ipcPort=36325, storageInfo=lv=-57;cid=testClusterID;nsid=1214061596;c=1689952628262), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:17:09,039 DEBUG [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0 2023-07-21 15:17:09,041 INFO [Listener at localhost.localdomain/36325] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/zookeeper_0, clientPort=57770, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 15:17:09,042 INFO [Listener at localhost.localdomain/36325] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57770 2023-07-21 15:17:09,042 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,043 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,059 INFO [Listener at localhost.localdomain/36325] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea with version=8 2023-07-21 15:17:09,059 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost.localdomain:41491/user/jenkins/test-data/cf66a909-f06d-12a4-4858-c09e195ffd58/hbase-staging 2023-07-21 15:17:09,060 DEBUG [Listener at localhost.localdomain/36325] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 15:17:09,060 DEBUG [Listener at localhost.localdomain/36325] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 15:17:09,060 DEBUG [Listener at localhost.localdomain/36325] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 15:17:09,060 DEBUG [Listener at localhost.localdomain/36325] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:09,061 INFO [Listener at localhost.localdomain/36325] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:09,062 INFO [Listener at localhost.localdomain/36325] ipc.NettyRpcServer(120): Bind to /136.243.18.41:32893 2023-07-21 15:17:09,062 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,063 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,064 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=master:32893 connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:09,069 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:328930x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:09,070 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:32893-0x10188742c4f0000 connected 2023-07-21 15:17:09,081 DEBUG [Listener at 
localhost.localdomain/36325] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:09,082 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:09,082 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:09,084 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32893 2023-07-21 15:17:09,084 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32893 2023-07-21 15:17:09,086 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32893 2023-07-21 15:17:09,087 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32893 2023-07-21 15:17:09,087 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32893 2023-07-21 15:17:09,089 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:09,089 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:09,090 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:09,090 INFO [Listener at localhost.localdomain/36325] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 15:17:09,090 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:09,090 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:09,090 INFO [Listener at localhost.localdomain/36325] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:09,091 INFO [Listener at localhost.localdomain/36325] http.HttpServer(1146): Jetty bound to port 33545 2023-07-21 15:17:09,091 INFO [Listener at localhost.localdomain/36325] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:09,097 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,097 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@511b45ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:09,097 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,097 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@52c70bdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:09,195 INFO [Listener at localhost.localdomain/36325] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:09,196 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:09,196 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:09,197 INFO [Listener at localhost.localdomain/36325] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:17:09,198 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,199 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7ac1c0b6{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/jetty-0_0_0_0-33545-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1244444959361026260/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:17:09,201 INFO [Listener at localhost.localdomain/36325] server.AbstractConnector(333): Started ServerConnector@26781894{HTTP/1.1, (http/1.1)}{0.0.0.0:33545} 2023-07-21 15:17:09,201 INFO [Listener at localhost.localdomain/36325] server.Server(415): Started @45147ms 2023-07-21 15:17:09,201 INFO [Listener at localhost.localdomain/36325] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea, hbase.cluster.distributed=false 2023-07-21 15:17:09,217 INFO [Listener at localhost.localdomain/36325] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:09,217 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,217 INFO 
[Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,217 INFO [Listener at localhost.localdomain/36325] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:09,218 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,218 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:09,218 INFO [Listener at localhost.localdomain/36325] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:09,222 INFO [Listener at localhost.localdomain/36325] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38059 2023-07-21 15:17:09,223 INFO [Listener at localhost.localdomain/36325] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:09,228 DEBUG [Listener at localhost.localdomain/36325] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:09,229 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,231 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,232 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38059 connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:09,255 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:380590x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:09,255 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:380590x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:09,257 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:380590x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:09,259 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:380590x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:09,265 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38059-0x10188742c4f0001 connected 2023-07-21 15:17:09,265 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 15:17:09,266 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started 
handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38059 2023-07-21 15:17:09,267 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38059 2023-07-21 15:17:09,268 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 15:17:09,269 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 15:17:09,271 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:09,271 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:09,271 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:09,272 INFO [Listener at localhost.localdomain/36325] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:09,272 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:09,272 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:09,273 INFO [Listener at localhost.localdomain/36325] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:09,273 INFO [Listener at localhost.localdomain/36325] http.HttpServer(1146): Jetty bound to port 41585 2023-07-21 15:17:09,273 INFO [Listener at localhost.localdomain/36325] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:09,275 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,275 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f8c1327{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:09,276 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,276 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55005b15{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:09,397 INFO [Listener at localhost.localdomain/36325] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:09,398 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:09,398 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:09,398 INFO [Listener at localhost.localdomain/36325] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:17:09,399 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,400 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3e25f622{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/jetty-0_0_0_0-41585-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3735451817599940568/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:09,401 INFO [Listener at localhost.localdomain/36325] server.AbstractConnector(333): Started ServerConnector@5dc82141{HTTP/1.1, (http/1.1)}{0.0.0.0:41585} 2023-07-21 15:17:09,401 INFO [Listener at localhost.localdomain/36325] server.Server(415): Started @45347ms 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:09,412 INFO [Listener at localhost.localdomain/36325] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:09,413 INFO [Listener at localhost.localdomain/36325] ipc.NettyRpcServer(120): Bind to /136.243.18.41:44393 2023-07-21 15:17:09,413 INFO [Listener at localhost.localdomain/36325] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:09,417 DEBUG [Listener at localhost.localdomain/36325] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:09,418 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,420 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,421 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44393 connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:09,424 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:443930x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:09,426 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:443930x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:09,427 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44393-0x10188742c4f0002 connected 2023-07-21 15:17:09,427 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:09,428 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:09,432 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44393 2023-07-21 15:17:09,433 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44393 2023-07-21 15:17:09,433 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44393 2023-07-21 15:17:09,436 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44393 2023-07-21 15:17:09,437 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44393 2023-07-21 15:17:09,439 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:09,440 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:09,440 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:09,440 INFO [Listener at localhost.localdomain/36325] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:09,440 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:09,441 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:09,441 INFO [Listener at localhost.localdomain/36325] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:09,441 INFO [Listener at localhost.localdomain/36325] http.HttpServer(1146): Jetty bound to port 33251 2023-07-21 15:17:09,442 INFO [Listener at localhost.localdomain/36325] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:09,447 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,447 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f111747{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:09,448 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,448 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@dff6508{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:09,553 INFO [Listener at localhost.localdomain/36325] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:09,554 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:09,554 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:09,555 INFO [Listener at localhost.localdomain/36325] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:17:09,555 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,556 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@19c62d88{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/jetty-0_0_0_0-33251-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1118231532903943661/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:09,559 INFO [Listener at localhost.localdomain/36325] server.AbstractConnector(333): Started ServerConnector@116f5b6e{HTTP/1.1, (http/1.1)}{0.0.0.0:33251} 2023-07-21 15:17:09,559 INFO [Listener at localhost.localdomain/36325] server.Server(415): Started @45504ms 2023-07-21 15:17:09,569 INFO [Listener at localhost.localdomain/36325] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:09,569 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,570 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:17:09,570 INFO [Listener at localhost.localdomain/36325] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:09,570 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:09,570 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:09,570 INFO [Listener at localhost.localdomain/36325] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:09,571 INFO [Listener at localhost.localdomain/36325] ipc.NettyRpcServer(120): Bind to /136.243.18.41:42481 2023-07-21 15:17:09,571 INFO [Listener at localhost.localdomain/36325] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:09,572 DEBUG [Listener at localhost.localdomain/36325] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:09,572 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,573 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,574 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42481 connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:09,577 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:424810x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:09,578 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:424810x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:09,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42481-0x10188742c4f0003 connected 2023-07-21 15:17:09,579 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:09,580 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:09,580 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-21 15:17:09,580 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42481 2023-07-21 15:17:09,583 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42481 2023-07-21 15:17:09,583 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-21 15:17:09,583 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-21 15:17:09,585 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:09,585 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:09,585 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] http.HttpServer(1146): Jetty bound to port 37999 2023-07-21 15:17:09,586 INFO [Listener at localhost.localdomain/36325] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:09,589 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,589 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28bc3494{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:09,589 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,589 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e2bab9d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:09,690 INFO [Listener at localhost.localdomain/36325] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:09,691 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:09,691 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:09,691 INFO [Listener at localhost.localdomain/36325] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:17:09,692 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:09,693 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6d88d7ed{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/jetty-0_0_0_0-37999-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2468964340370043235/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:09,694 INFO [Listener at localhost.localdomain/36325] server.AbstractConnector(333): Started ServerConnector@e8ecf8b{HTTP/1.1, (http/1.1)}{0.0.0.0:37999} 2023-07-21 15:17:09,694 INFO [Listener at localhost.localdomain/36325] server.Server(415): Started @45640ms 2023-07-21 15:17:09,696 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:09,700 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@8e62974{HTTP/1.1, (http/1.1)}{0.0.0.0:35533} 2023-07-21 15:17:09,700 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @45646ms 2023-07-21 15:17:09,700 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,701 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:17:09,701 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,702 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:09,702 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:09,702 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,702 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:09,702 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:09,703 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:17:09,706 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,32893,1689952629060 from backup master directory 2023-07-21 15:17:09,706 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:17:09,706 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,706 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:17:09,706 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:17:09,706 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,725 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/hbase.id with ID: 17c57e0a-bd67-4811-977b-8bee167bbadf 2023-07-21 15:17:09,737 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:09,739 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,752 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1d7973b4 to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:09,755 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@529daddd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:09,755 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:09,756 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 15:17:09,756 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:09,758 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store-tmp 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:17:09,789 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:09,789 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:09,789 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:09,790 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/WALs/jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,798 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C32893%2C1689952629060, suffix=, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/WALs/jenkins-hbase17.apache.org,32893,1689952629060, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/oldWALs, maxLogs=10 2023-07-21 15:17:09,825 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:09,826 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:09,825 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:09,832 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/WALs/jenkins-hbase17.apache.org,32893,1689952629060/jenkins-hbase17.apache.org%2C32893%2C1689952629060.1689952629799 2023-07-21 15:17:09,833 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK], DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK]] 2023-07-21 15:17:09,833 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:09,833 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:09,833 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,833 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,835 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,837 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:17:09,837 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:17:09,838 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:09,838 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,839 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,841 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:17:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:09,843 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11843515360, jitterRate=0.10301332175731659}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:09,843 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:09,843 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:17:09,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:17:09,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:17:09,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 15:17:09,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 15:17:09,845 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 15:17:09,845 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:17:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 15:17:09,846 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 15:17:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 15:17:09,847 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:17:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:17:09,849 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,849 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:17:09,850 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:17:09,850 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:17:09,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:09,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:09,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:09,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:09,851 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,32893,1689952629060, sessionid=0x10188742c4f0000, setting cluster-up flag (Was=false) 2023-07-21 15:17:09,854 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,857 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:17:09,857 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,859 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:09,861 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 15:17:09,862 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:09,863 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.hbase-snapshot/.tmp 2023-07-21 15:17:09,869 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:17:09,869 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:17:09,870 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:09,870 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:17:09,870 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 15:17:09,871 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:09,885 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:17:09,886 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 15:17:09,886 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:17:09,886 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:09,886 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952659888 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:17:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:09,889 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:09,890 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 15:17:09,891 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:09,893 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:17:09,893 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:17:09,893 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:17:09,901 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(951): ClusterId : 17c57e0a-bd67-4811-977b-8bee167bbadf 2023-07-21 15:17:09,906 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(951): ClusterId : 17c57e0a-bd67-4811-977b-8bee167bbadf 2023-07-21 15:17:09,906 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(951): ClusterId : 17c57e0a-bd67-4811-977b-8bee167bbadf 2023-07-21 15:17:09,910 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:17:09,910 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:09,910 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:09,910 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:17:09,911 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:09,912 
DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:09,912 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:09,912 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:09,912 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:17:09,912 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:09,912 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:09,913 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:09,914 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:09,914 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:09,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952629910,5,FailOnTimeoutGroup] 2023-07-21 15:17:09,921 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952629920,5,FailOnTimeoutGroup] 2023-07-21 15:17:09,923 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:09,923 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ReadOnlyZKClient(139): Connect 0x7851c3bf to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:09,925 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ReadOnlyZKClient(139): Connect 0x6aff5dfe to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:09,928 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 15:17:09,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:09,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 15:17:09,933 DEBUG [RS:2;jenkins-hbase17:42481] zookeeper.ReadOnlyZKClient(139): Connect 0x7a97e446 to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:09,960 DEBUG [RS:1;jenkins-hbase17:44393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@629aa8a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:09,960 DEBUG [RS:1;jenkins-hbase17:44393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5bdfb748, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:09,960 DEBUG [RS:0;jenkins-hbase17:38059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@769632c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:09,961 DEBUG [RS:0;jenkins-hbase17:38059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72a9a8cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:09,965 DEBUG [RS:2;jenkins-hbase17:42481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ea86fcc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:09,966 DEBUG [RS:2;jenkins-hbase17:42481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3baaee7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:09,967 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:09,968 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:09,968 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea 2023-07-21 15:17:09,971 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:44393 2023-07-21 15:17:09,971 INFO [RS:1;jenkins-hbase17:44393] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:09,971 INFO [RS:1;jenkins-hbase17:44393] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:09,971 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:09,971 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:38059 2023-07-21 15:17:09,972 INFO [RS:0;jenkins-hbase17:38059] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:09,972 INFO [RS:0;jenkins-hbase17:38059] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:09,972 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,32893,1689952629060 with isa=jenkins-hbase17.apache.org/136.243.18.41:44393, startcode=1689952629411 2023-07-21 15:17:09,972 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:09,972 DEBUG [RS:1;jenkins-hbase17:44393] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:09,972 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,32893,1689952629060 with isa=jenkins-hbase17.apache.org/136.243.18.41:38059, startcode=1689952629216 2023-07-21 15:17:09,972 DEBUG [RS:0;jenkins-hbase17:38059] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:09,976 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:42481 2023-07-21 15:17:09,976 INFO [RS:2;jenkins-hbase17:42481] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:09,976 INFO [RS:2;jenkins-hbase17:42481] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:09,976 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:17:09,977 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,32893,1689952629060 with isa=jenkins-hbase17.apache.org/136.243.18.41:42481, startcode=1689952629569 2023-07-21 15:17:09,977 DEBUG [RS:2;jenkins-hbase17:42481] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:09,977 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:46663, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:09,981 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:09,982 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:09,985 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea 2023-07-21 15:17:09,985 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35877 2023-07-21 15:17:09,985 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35053, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:09,985 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33545 2023-07-21 15:17:09,986 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57799, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:09,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:17:09,986 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:09,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:17:09,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:17:09,986 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:09,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:17:09,986 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:17:09,986 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea 2023-07-21 15:17:09,986 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35877 2023-07-21 15:17:09,986 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33545 2023-07-21 15:17:09,987 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea 2023-07-21 15:17:09,987 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35877 2023-07-21 15:17:09,987 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33545 2023-07-21 15:17:09,987 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:09,989 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:09,990 WARN [RS:1;jenkins-hbase17:44393] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:17:09,990 INFO [RS:1;jenkins-hbase17:44393] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:09,990 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:09,990 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,44393,1689952629411] 2023-07-21 15:17:09,990 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:09,990 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38059,1689952629216] 2023-07-21 15:17:09,990 WARN [RS:0;jenkins-hbase17:38059] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:17:09,990 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,42481,1689952629569] 2023-07-21 15:17:09,990 INFO [RS:0;jenkins-hbase17:38059] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:09,990 DEBUG [RS:2;jenkins-hbase17:42481] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:09,990 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:09,990 WARN [RS:2;jenkins-hbase17:42481] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:17:09,990 INFO [RS:2;jenkins-hbase17:42481] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:09,990 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,002 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:10,003 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:10,003 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:10,003 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:17:10,003 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,004 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,004 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,004 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,004 DEBUG [RS:2;jenkins-hbase17:42481] 
zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:10,005 DEBUG [RS:1;jenkins-hbase17:44393] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:10,005 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:10,005 INFO [RS:1;jenkins-hbase17:44393] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:10,005 INFO [RS:0;jenkins-hbase17:38059] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:10,005 DEBUG [RS:2;jenkins-hbase17:42481] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,005 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/info 2023-07-21 15:17:10,005 DEBUG [RS:2;jenkins-hbase17:42481] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,005 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:17:10,006 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:10,006 INFO [RS:1;jenkins-hbase17:44393] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:10,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,006 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:17:10,007 INFO [RS:2;jenkins-hbase17:42481] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:10,008 INFO [RS:1;jenkins-hbase17:44393] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning 
period: 60000 ms 2023-07-21 15:17:10,008 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,008 INFO [RS:0;jenkins-hbase17:38059] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:10,008 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:10,009 INFO [RS:2;jenkins-hbase17:42481] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:10,009 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:17:10,010 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,010 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:17:10,011 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:10,011 INFO [RS:0;jenkins-hbase17:38059] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:10,011 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,011 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/table 2023-07-21 15:17:10,011 INFO [RS:2;jenkins-hbase17:42481] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:10,012 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:10,013 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
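The CompactionConfiguration and PressureAwareCompactionThroughputController values logged above (minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2, off-peak ratio 5.0, throughput bounded between 50 and 100 MB/s) correspond to standard hbase-site.xml keys. A minimal, hedged sketch of setting them programmatically; the key names are standard HBase 2.x settings, the values simply mirror the log and are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
    public static Configuration compactionConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);              // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);             // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);       // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);   // minCompactSize
        // PressureAwareCompactionThroughputController bounds (100 MB/s upper, 50 MB/s lower).
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
    }
}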
2023-07-21 15:17:10,013 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:10,013 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:17:10,014 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:10,015 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,015 DEBUG [RS:1;jenkins-hbase17:44393] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,017 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:10,017 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,017 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,017 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,018 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,018 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, 
corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:10,018 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,018 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740 2023-07-21 15:17:10,019 DEBUG [RS:2;jenkins-hbase17:42481] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,019 DEBUG [RS:0;jenkins-hbase17:38059] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:10,021 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:17:10,022 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:17:10,025 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,025 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,025 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,028 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,029 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:10,029 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,029 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
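The CompactionChecker, MemstoreFlusherChore and nonceCleaner entries above are periodic tasks registered with HBase's ChoreService. A hedged sketch of how a ScheduledChore is typically defined and scheduled; ChoreService, ScheduledChore and Stoppable are real HBase classes, but the chore name, period and work done here are invented for illustration:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreExample {
    // Minimal Stoppable so the chore can be cancelled cooperatively.
    static class SimpleStopper implements Stoppable {
        private volatile boolean stopped;
        @Override public void stop(String why) { stopped = true; }
        @Override public boolean isStopped() { return stopped; }
    }

    public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new SimpleStopper();
        ChoreService service = new ChoreService("example");
        // Runs every 1000 ms, mirroring the CompactionChecker period seen in the log.
        ScheduledChore chore = new ScheduledChore("ExampleChecker", stopper, 1000) {
            @Override protected void chore() {
                System.out.println("periodic work");
            }
        };
        service.scheduleChore(chore);
        Thread.sleep(3000);
        service.shutdown();
    }
}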
2023-07-21 15:17:10,030 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10038189600, jitterRate=-0.06512074172496796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:17:10,030 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:17:10,030 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:17:10,030 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:17:10,030 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:17:10,030 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:17:10,030 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:17:10,032 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:10,032 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:17:10,033 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:17:10,033 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 15:17:10,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:17:10,034 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:17:10,036 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 15:17:10,038 INFO [RS:1;jenkins-hbase17:44393] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:10,038 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,44393,1689952629411-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,041 INFO [RS:0;jenkins-hbase17:38059] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:10,041 INFO [RS:2;jenkins-hbase17:42481] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:10,042 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38059,1689952629216-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,042 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42481,1689952629569-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
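The globalMemStoreLimit=782.4 M / globalMemStoreLimitLowMark=743.3 M figures and the HeapMemoryTunerChore above are derived from heap-fraction settings. A hedged sketch of the two standard keys involved; the fractions shown are the usual defaults and only illustrate where the logged numbers come from:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreLimits {
    public static Configuration memstoreConf() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the heap all memstores together may use (default 0.4).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Flushing stops once usage drops below this fraction of the limit (default 0.95),
        // which is what produces the ~743 MB low mark against the ~782 MB limit in the log.
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        return conf;
    }
}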
2023-07-21 15:17:10,056 INFO [RS:1;jenkins-hbase17:44393] regionserver.Replication(203): jenkins-hbase17.apache.org,44393,1689952629411 started 2023-07-21 15:17:10,056 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,44393,1689952629411, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:44393, sessionid=0x10188742c4f0002 2023-07-21 15:17:10,056 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:10,056 DEBUG [RS:1;jenkins-hbase17:44393] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:10,056 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44393,1689952629411' 2023-07-21 15:17:10,056 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44393,1689952629411' 2023-07-21 15:17:10,057 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:10,058 DEBUG [RS:1;jenkins-hbase17:44393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:10,058 DEBUG [RS:1;jenkins-hbase17:44393] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:10,058 INFO [RS:1;jenkins-hbase17:44393] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:17:10,058 INFO [RS:1;jenkins-hbase17:44393] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
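The flush-table-proc and online-snapshot procedure members started above are the region-server side of the distributed flush and snapshot procedures that a client can trigger through Admin. A hedged sketch of issuing those operations from the 2.x Java client; the table and snapshot names are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndSnapshot {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table = TableName.valueOf("example_table");  // hypothetical table
            admin.flush(table);                         // handled by the flush-table-proc members
            admin.snapshot("example_snapshot", table);  // handled by the online-snapshot members
        }
    }
}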
2023-07-21 15:17:10,059 INFO [RS:2;jenkins-hbase17:42481] regionserver.Replication(203): jenkins-hbase17.apache.org,42481,1689952629569 started 2023-07-21 15:17:10,059 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,42481,1689952629569, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:42481, sessionid=0x10188742c4f0003 2023-07-21 15:17:10,059 INFO [RS:0;jenkins-hbase17:38059] regionserver.Replication(203): jenkins-hbase17.apache.org,38059,1689952629216 started 2023-07-21 15:17:10,059 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:10,059 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38059,1689952629216, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38059, sessionid=0x10188742c4f0001 2023-07-21 15:17:10,059 DEBUG [RS:2;jenkins-hbase17:42481] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,059 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:10,059 DEBUG [RS:0;jenkins-hbase17:38059] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,059 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38059,1689952629216' 2023-07-21 15:17:10,059 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,42481,1689952629569' 2023-07-21 15:17:10,059 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:10,059 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38059,1689952629216' 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot 
starting 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,42481,1689952629569' 2023-07-21 15:17:10,060 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:10,060 DEBUG [RS:0;jenkins-hbase17:38059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:10,061 DEBUG [RS:2;jenkins-hbase17:42481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:10,061 DEBUG [RS:0;jenkins-hbase17:38059] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:10,061 INFO [RS:0;jenkins-hbase17:38059] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:17:10,061 INFO [RS:0;jenkins-hbase17:38059] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:17:10,061 DEBUG [RS:2;jenkins-hbase17:42481] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:10,061 INFO [RS:2;jenkins-hbase17:42481] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:17:10,061 INFO [RS:2;jenkins-hbase17:42481] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:17:10,160 INFO [RS:1;jenkins-hbase17:44393] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C44393%2C1689952629411, suffix=, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,44393,1689952629411, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs, maxLogs=32 2023-07-21 15:17:10,164 INFO [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C42481%2C1689952629569, suffix=, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,42481,1689952629569, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs, maxLogs=32 2023-07-21 15:17:10,164 INFO [RS:0;jenkins-hbase17:38059] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38059%2C1689952629216, suffix=, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,38059,1689952629216, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs, maxLogs=32 2023-07-21 15:17:10,186 DEBUG [jenkins-hbase17:32893] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:17:10,186 DEBUG [jenkins-hbase17:32893] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:10,187 DEBUG [jenkins-hbase17:32893] 
balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:10,187 DEBUG [jenkins-hbase17:32893] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:10,187 DEBUG [jenkins-hbase17:32893] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:10,187 DEBUG [jenkins-hbase17:32893] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:10,194 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,42481,1689952629569, state=OPENING 2023-07-21 15:17:10,195 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 15:17:10,195 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:10,196 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,42481,1689952629569}] 2023-07-21 15:17:10,196 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:17:10,228 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:10,229 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:10,229 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:10,238 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:10,239 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:10,239 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:10,288 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:10,288 DEBUG [RS-EventLoopGroup-15-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:10,288 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:10,292 INFO [RS:0;jenkins-hbase17:38059] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,38059,1689952629216/jenkins-hbase17.apache.org%2C38059%2C1689952629216.1689952630165 2023-07-21 15:17:10,292 INFO [RS:1;jenkins-hbase17:44393] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,44393,1689952629411/jenkins-hbase17.apache.org%2C44393%2C1689952629411.1689952630160 2023-07-21 15:17:10,296 DEBUG [RS:0;jenkins-hbase17:38059] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK], DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK], DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK]] 2023-07-21 15:17:10,297 DEBUG [RS:1;jenkins-hbase17:44393] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK], DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK]] 2023-07-21 15:17:10,297 INFO [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,42481,1689952629569/jenkins-hbase17.apache.org%2C42481%2C1689952629569.1689952630165 2023-07-21 15:17:10,297 DEBUG [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK], DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK]] 2023-07-21 15:17:10,444 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,445 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:17:10,446 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:17:10,449 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:17:10,449 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:10,451 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase17.apache.org%2C42481%2C1689952629569.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,42481,1689952629569, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs, maxLogs=32 2023-07-21 15:17:10,465 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:10,465 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:10,465 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:10,467 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,42481,1689952629569/jenkins-hbase17.apache.org%2C42481%2C1689952629569.meta.1689952630452.meta 2023-07-21 15:17:10,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK], DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK]] 2023-07-21 15:17:10,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:10,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:17:10,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:17:10,467 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
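The AsyncFSWALProvider instantiation and the "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" WAL configuration lines above map to a small set of keys. A hedged sketch; the key names are standard and the values mirror the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalSettings {
    public static Configuration walConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                     // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);  // rollsize = 0.5 * blocksize
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
    }
}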
2023-07-21 15:17:10,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:17:10,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:10,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:17:10,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:17:10,469 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:17:10,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/info 2023-07-21 15:17:10,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/info 2023-07-21 15:17:10,471 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:17:10,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:17:10,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:10,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:17:10,473 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:17:10,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:17:10,474 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/table 2023-07-21 15:17:10,474 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/table 2023-07-21 15:17:10,475 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:17:10,475 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740 2023-07-21 15:17:10,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740 2023-07-21 15:17:10,479 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
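FlushLargeStoresPolicy falls back to memstoreFlushSize divided by the number of families (42.7 M here, i.e. 128 MB across three families) because hbase.hregion.percolumnfamilyflush.size.lower.bound is unset for hbase:meta. A hedged sketch of setting that bound explicitly; the 16 MB value is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FlushPolicyBound {
    public static Configuration flushConf() {
        Configuration conf = HBaseConfiguration.create();
        // Region-level flush trigger (default 128 MB).
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        // Explicit per-column-family lower bound for FlushLargeStoresPolicy,
        // instead of the memstoreFlushSize / #families fallback logged above.
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 16L * 1024 * 1024);
        return conf;
    }
}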
2023-07-21 15:17:10,480 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:17:10,480 WARN [ReadOnlyZKClient-127.0.0.1:57770@0x1d7973b4] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:17:10,482 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:10,482 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11687843520, jitterRate=0.0885152518749237}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:17:10,482 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:17:10,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689952630444 2023-07-21 15:17:10,486 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44326, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:10,487 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42481] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:44326 deadline: 1689952690486, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:10,489 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:17:10,490 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:17:10,490 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,42481,1689952629569, state=OPEN 2023-07-21 15:17:10,491 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:17:10,491 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:17:10,493 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 15:17:10,493 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,42481,1689952629569 in 295 msec 2023-07-21 15:17:10,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 15:17:10,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 460 msec 2023-07-21 15:17:10,495 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, 
state=SUCCESS; InitMetaProcedure table=hbase:meta in 623 msec 2023-07-21 15:17:10,495 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952630495, completionTime=-1 2023-07-21 15:17:10,495 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 15:17:10,496 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 15:17:10,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:17:10,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952690501 2023-07-21 15:17:10,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952750501 2023-07-21 15:17:10,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,32893,1689952629060-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,32893,1689952629060-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,32893,1689952629060-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:32893, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:10,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 15:17:10,507 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:10,507 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 15:17:10,510 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:10,510 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 15:17:10,511 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:10,512 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/namespace/2662492254ae1355d69b669151350966 2023-07-21 15:17:10,513 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/namespace/2662492254ae1355d69b669151350966 empty. 2023-07-21 15:17:10,514 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/namespace/2662492254ae1355d69b669151350966 2023-07-21 15:17:10,514 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 15:17:10,528 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:10,529 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2662492254ae1355d69b669151350966, NAME => 'hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2662492254ae1355d69b669151350966, disabling compactions & flushes 2023-07-21 15:17:10,537 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. after waiting 0 ms 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:10,537 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:10,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2662492254ae1355d69b669151350966: 2023-07-21 15:17:10,539 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:10,540 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952630540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952630540"}]},"ts":"1689952630540"} 2023-07-21 15:17:10,543 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:10,544 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:10,544 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952630544"}]},"ts":"1689952630544"} 2023-07-21 15:17:10,545 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 15:17:10,547 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:10,547 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:10,547 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:10,547 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:10,547 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:10,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2662492254ae1355d69b669151350966, ASSIGN}] 2023-07-21 15:17:10,549 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2662492254ae1355d69b669151350966, ASSIGN 2023-07-21 15:17:10,550 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2662492254ae1355d69b669151350966, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38059,1689952629216; forceNewPlan=false, retain=false 2023-07-21 15:17:10,701 INFO [jenkins-hbase17:32893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:17:10,702 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2662492254ae1355d69b669151350966, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,702 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952630702"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952630702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952630702"}]},"ts":"1689952630702"} 2023-07-21 15:17:10,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 2662492254ae1355d69b669151350966, server=jenkins-hbase17.apache.org,38059,1689952629216}] 2023-07-21 15:17:10,856 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,856 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:17:10,858 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:17:10,862 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 
2023-07-21 15:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2662492254ae1355d69b669151350966, NAME => 'hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2662492254ae1355d69b669151350966 2023-07-21 15:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 2662492254ae1355d69b669151350966 2023-07-21 15:17:10,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 2662492254ae1355d69b669151350966 2023-07-21 15:17:10,864 INFO [StoreOpener-2662492254ae1355d69b669151350966-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2662492254ae1355d69b669151350966 2023-07-21 15:17:10,865 DEBUG [StoreOpener-2662492254ae1355d69b669151350966-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/info 2023-07-21 15:17:10,865 DEBUG [StoreOpener-2662492254ae1355d69b669151350966-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/info 2023-07-21 15:17:10,865 INFO [StoreOpener-2662492254ae1355d69b669151350966-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2662492254ae1355d69b669151350966 columnFamilyName info 2023-07-21 15:17:10,866 INFO [StoreOpener-2662492254ae1355d69b669151350966-1] regionserver.HStore(310): Store=2662492254ae1355d69b669151350966/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:10,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966 2023-07-21 15:17:10,867 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966 2023-07-21 15:17:10,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 2662492254ae1355d69b669151350966 2023-07-21 15:17:10,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:10,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 2662492254ae1355d69b669151350966; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11123632960, jitterRate=0.035969048738479614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:10,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 2662492254ae1355d69b669151350966: 2023-07-21 15:17:10,873 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966., pid=6, masterSystemTime=1689952630856 2023-07-21 15:17:10,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:10,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 
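The open sequence above (pid=6 dispatched to jenkins-hbase17.apache.org,38059, the store opening, and finally "Opened hbase:namespace...") is what test code typically waits on before touching the namespace table. A minimal sketch of that wait, assuming the standard HBaseTestingUtility helpers; the utility instance and class name here are illustrative and not taken from this test:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForNamespaceRegion {
      // Sketch: block until every region of hbase:namespace is assigned and
      // the table is usable, mirroring the assign/open procedures logged above.
      static void waitForNamespaceOnline(HBaseTestingUtility util) throws Exception {
        TableName ns = TableName.valueOf("hbase:namespace");
        // Polls hbase:meta until the region's state column reads OPEN,
        // i.e. the transition that pid=5/pid=6 drive in the log.
        util.waitUntilAllRegionsAssigned(ns);
        // Additionally waits for the table itself to be available.
        util.waitTableAvailable(ns);
      }
    }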
2023-07-21 15:17:10,879 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2662492254ae1355d69b669151350966, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:10,879 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952630879"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952630879"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952630879"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952630879"}]},"ts":"1689952630879"} 2023-07-21 15:17:10,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 15:17:10,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 2662492254ae1355d69b669151350966, server=jenkins-hbase17.apache.org,38059,1689952629216 in 177 msec 2023-07-21 15:17:10,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 15:17:10,883 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2662492254ae1355d69b669151350966, ASSIGN in 334 msec 2023-07-21 15:17:10,883 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:10,883 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952630883"}]},"ts":"1689952630883"} 2023-07-21 15:17:10,884 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 15:17:10,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:10,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 379 msec 2023-07-21 15:17:10,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 15:17:10,909 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:10,910 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:10,912 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:10,913 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60458, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:10,916 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 15:17:10,932 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:10,934 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 18 msec 2023-07-21 15:17:10,937 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:17:10,944 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:10,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-21 15:17:10,953 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 15:17:10,954 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 15:17:10,954 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.248sec 2023-07-21 15:17:10,954 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 15:17:10,955 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 15:17:10,955 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 15:17:10,955 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,32893,1689952629060-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 15:17:10,955 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,32893,1689952629060-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
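The pid=7 and pid=8 CreateNamespaceProcedure runs above are the master pre-creating the built-in 'default' and 'hbase' namespaces. A client-triggered namespace create goes through the same procedure type; a small sketch using the standard Admin API, assuming an ordinary client configuration (the namespace name "testns" is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Submits a CreateNamespaceProcedure on the master, the same
          // procedure type as pid=7/pid=8 in the log above.
          admin.createNamespace(NamespaceDescriptor.create("testns").build());
        }
      }
    }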
2023-07-21 15:17:10,955 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 15:17:10,990 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:10,992 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:17:10,994 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:10,995 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:10,997 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:10,997 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd empty. 
2023-07-21 15:17:10,998 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:10,998 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 15:17:11,015 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ReadOnlyZKClient(139): Connect 0x3fb06810 to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:11,028 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:11,028 DEBUG [Listener at localhost.localdomain/36325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d11d20a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:11,029 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b0c31c07f229872023897d5df93447bd, NAME => 'hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp 2023-07-21 15:17:11,029 DEBUG [hconnection-0x68d4ffa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:11,031 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:11,033 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:11,033 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing b0c31c07f229872023897d5df93447bd, disabling compactions & flushes 2023-07-21 15:17:11,043 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 
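The descriptor logged for hbase:rsgroup (MultiRowMutationEndpoint coprocessor, DisabledRegionSplitPolicy, a single-version 'm' family) can be expressed with the 2.x descriptor builders. A rough client-side equivalent, offered only as a sketch; the table name, class name, and Admin handle are illustrative and not part of this test:

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeTable {
      // Builds a descriptor shaped like the hbase:rsgroup one in the log:
      // one coprocessor, splits disabled, a single-version 'm' family.
      static TableDescriptor buildDescriptor() throws IOException {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("rsgrouplike"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .build())
            .build();
      }

      static void create(Admin admin) throws IOException {
        // Runs through the same CreateTableProcedure states seen for pid=9.
        admin.createTable(buildDescriptor());
      }
    }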
2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. after waiting 0 ms 2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,043 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,043 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for b0c31c07f229872023897d5df93447bd: 2023-07-21 15:17:11,045 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:11,045 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952631045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952631045"}]},"ts":"1689952631045"} 2023-07-21 15:17:11,047 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:11,048 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:11,048 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952631048"}]},"ts":"1689952631048"} 2023-07-21 15:17:11,050 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 15:17:11,051 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:11,052 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:11,052 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:11,052 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:11,052 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:11,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b0c31c07f229872023897d5df93447bd, ASSIGN}] 2023-07-21 15:17:11,058 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b0c31c07f229872023897d5df93447bd, ASSIGN 2023-07-21 15:17:11,059 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=hbase:rsgroup, region=b0c31c07f229872023897d5df93447bd, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,42481,1689952629569; forceNewPlan=false, retain=false 2023-07-21 15:17:11,209 INFO [jenkins-hbase17:32893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:17:11,211 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b0c31c07f229872023897d5df93447bd, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,211 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952631211"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952631211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952631211"}]},"ts":"1689952631211"} 2023-07-21 15:17:11,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure b0c31c07f229872023897d5df93447bd, server=jenkins-hbase17.apache.org,42481,1689952629569}] 2023-07-21 15:17:11,368 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b0c31c07f229872023897d5df93447bd, NAME => 'hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. service=MultiRowMutationService 2023-07-21 15:17:11,369 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,372 INFO [StoreOpener-b0c31c07f229872023897d5df93447bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,375 DEBUG [StoreOpener-b0c31c07f229872023897d5df93447bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/m 2023-07-21 15:17:11,375 DEBUG [StoreOpener-b0c31c07f229872023897d5df93447bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/m 2023-07-21 15:17:11,376 INFO [StoreOpener-b0c31c07f229872023897d5df93447bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b0c31c07f229872023897d5df93447bd columnFamilyName m 2023-07-21 15:17:11,382 INFO [StoreOpener-b0c31c07f229872023897d5df93447bd-1] regionserver.HStore(310): Store=b0c31c07f229872023897d5df93447bd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:11,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,396 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:11,401 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:11,402 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b0c31c07f229872023897d5df93447bd; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2603e3c6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:11,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b0c31c07f229872023897d5df93447bd: 2023-07-21 15:17:11,403 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd., pid=11, masterSystemTime=1689952631365 2023-07-21 15:17:11,405 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,405 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:11,407 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b0c31c07f229872023897d5df93447bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,408 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952631407"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952631407"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952631407"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952631407"}]},"ts":"1689952631407"} 2023-07-21 15:17:11,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-21 15:17:11,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure b0c31c07f229872023897d5df93447bd, server=jenkins-hbase17.apache.org,42481,1689952629569 in 196 msec 2023-07-21 15:17:11,415 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-21 15:17:11,415 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b0c31c07f229872023897d5df93447bd, ASSIGN in 361 msec 2023-07-21 15:17:11,419 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:11,419 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952631419"}]},"ts":"1689952631419"} 2023-07-21 
15:17:11,421 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 15:17:11,423 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:11,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 433 msec 2023-07-21 15:17:11,495 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 15:17:11,495 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 15:17:11,498 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:11,498 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,499 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:17:11,500 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 15:17:11,538 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 15:17:11,540 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 15:17:11,542 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:17:11,543 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:11,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 15:17:11,544 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ReadOnlyZKClient(139): Connect 0x3e16adec to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:11,549 DEBUG [Listener at localhost.localdomain/36325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5288c74b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:11,549 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:11,553 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:11,554 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10188742c4f000a connected 2023-07-21 15:17:11,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,566 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 15:17:11,583 INFO [Listener at localhost.localdomain/36325] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:17:11,583 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:11,583 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:11,583 INFO [Listener at localhost.localdomain/36325] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:17:11,584 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:17:11,584 INFO [Listener at localhost.localdomain/36325] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:17:11,584 INFO [Listener at localhost.localdomain/36325] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:17:11,584 INFO [Listener at localhost.localdomain/36325] ipc.NettyRpcServer(120): Bind to /136.243.18.41:45125 2023-07-21 15:17:11,585 INFO [Listener at localhost.localdomain/36325] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:17:11,586 DEBUG [Listener at localhost.localdomain/36325] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:17:11,587 INFO [Listener at localhost.localdomain/36325] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:11,588 INFO [Listener at localhost.localdomain/36325] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:17:11,589 INFO [Listener at localhost.localdomain/36325] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45125 connecting to ZooKeeper ensemble=127.0.0.1:57770 2023-07-21 15:17:11,599 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:451250x0, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:17:11,601 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(162): regionserver:451250x0, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:17:11,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45125-0x10188742c4f000b connected 2023-07-21 15:17:11,603 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 15:17:11,603 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ZKUtil(164): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:17:11,604 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45125 2023-07-21 15:17:11,604 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45125 2023-07-21 15:17:11,604 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45125 2023-07-21 15:17:11,604 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45125 2023-07-21 15:17:11,605 DEBUG [Listener at localhost.localdomain/36325] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45125 2023-07-21 15:17:11,606 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:17:11,606 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:17:11,606 INFO [Listener at localhost.localdomain/36325] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:17:11,607 INFO [Listener at localhost.localdomain/36325] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:17:11,607 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:17:11,607 INFO [Listener at localhost.localdomain/36325] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static 2023-07-21 15:17:11,607 INFO [Listener at localhost.localdomain/36325] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 15:17:11,608 INFO [Listener at localhost.localdomain/36325] http.HttpServer(1146): Jetty bound to port 44967 2023-07-21 15:17:11,608 INFO [Listener at localhost.localdomain/36325] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:17:11,613 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:11,613 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@62566cb6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:17:11,614 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:11,614 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ff23c7c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:17:11,712 INFO [Listener at localhost.localdomain/36325] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:17:11,713 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:17:11,713 INFO [Listener at localhost.localdomain/36325] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:17:11,713 INFO [Listener at localhost.localdomain/36325] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:17:11,714 INFO [Listener at localhost.localdomain/36325] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:17:11,715 INFO [Listener at localhost.localdomain/36325] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@36d7d462{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/java.io.tmpdir/jetty-0_0_0_0-44967-hbase-server-2_4_18-SNAPSHOT_jar-_-any-740395718468964636/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:11,717 INFO [Listener at localhost.localdomain/36325] server.AbstractConnector(333): Started ServerConnector@2890b019{HTTP/1.1, (http/1.1)}{0.0.0.0:44967} 2023-07-21 15:17:11,717 INFO [Listener at localhost.localdomain/36325] server.Server(415): Started @47663ms 2023-07-21 15:17:11,720 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(951): ClusterId : 17c57e0a-bd67-4811-977b-8bee167bbadf 2023-07-21 15:17:11,720 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:17:11,722 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 
2023-07-21 15:17:11,722 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:17:11,723 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:17:11,725 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ReadOnlyZKClient(139): Connect 0x249d596a to 127.0.0.1:57770 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:17:11,730 DEBUG [RS:3;jenkins-hbase17:45125] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a5aa389, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:17:11,731 DEBUG [RS:3;jenkins-hbase17:45125] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e65321a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:11,738 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:45125 2023-07-21 15:17:11,738 INFO [RS:3;jenkins-hbase17:45125] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:17:11,738 INFO [RS:3;jenkins-hbase17:45125] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:17:11,739 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:17:11,739 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,32893,1689952629060 with isa=jenkins-hbase17.apache.org/136.243.18.41:45125, startcode=1689952631583 2023-07-21 15:17:11,739 DEBUG [RS:3;jenkins-hbase17:45125] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:17:11,745 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45325, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:17:11,746 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32893] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,746 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
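The "Restoring servers: 1" step at 15:17:11,566 and the RS:3 startup that follows are the test bringing the cluster back up to four region servers. A hedged sketch of the corresponding call against a running mini-cluster; the utility instance and method usage below are the stock MiniHBaseCluster API, used purely for illustration:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class RestoreRegionServer {
      // Sketch: start one extra region server in an already-running
      // mini-cluster, producing a startup sequence like RS:3 above.
      static void startExtraRegionServer(HBaseTestingUtility util) throws Exception {
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        // Block until the new server has registered with the master
        // (the "Registering regionserver=..." step in the log).
        rst.waitForServerOnline();
      }
    }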
2023-07-21 15:17:11,746 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea 2023-07-21 15:17:11,746 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35877 2023-07-21 15:17:11,746 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33545 2023-07-21 15:17:11,749 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:11,749 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:11,749 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:11,749 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,749 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:11,750 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,45125,1689952631583] 2023-07-21 15:17:11,750 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,750 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:17:11,750 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:11,750 WARN [RS:3;jenkins-hbase17:45125] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:17:11,750 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:11,750 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:11,750 INFO [RS:3;jenkins-hbase17:45125] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:17:11,751 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:11,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:11,751 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 15:17:11,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:11,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,754 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,757 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:11,757 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:11,758 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:11,758 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ZKUtil(162): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,759 DEBUG [RS:3;jenkins-hbase17:45125] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:17:11,759 INFO [RS:3;jenkins-hbase17:45125] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:17:11,760 INFO [RS:3;jenkins-hbase17:45125] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:17:11,760 INFO [RS:3;jenkins-hbase17:45125] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:17:11,760 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:11,761 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:17:11,762 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:17:11,763 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,764 DEBUG [RS:3;jenkins-hbase17:45125] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:17:11,766 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:11,766 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:11,766 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:17:11,776 INFO [RS:3;jenkins-hbase17:45125] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:17:11,776 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,45125,1689952631583-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:17:11,786 INFO [RS:3;jenkins-hbase17:45125] regionserver.Replication(203): jenkins-hbase17.apache.org,45125,1689952631583 started 2023-07-21 15:17:11,786 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,45125,1689952631583, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:45125, sessionid=0x10188742c4f000b 2023-07-21 15:17:11,786 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:17:11,786 DEBUG [RS:3;jenkins-hbase17:45125] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,786 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45125,1689952631583' 2023-07-21 15:17:11,786 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:17:11,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,45125,1689952631583' 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:17:11,787 DEBUG [RS:3;jenkins-hbase17:45125] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:17:11,788 DEBUG [RS:3;jenkins-hbase17:45125] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:17:11,788 INFO [RS:3;jenkins-hbase17:45125] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:17:11,788 INFO [RS:3;jenkins-hbase17:45125] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 15:17:11,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:11,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:11,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:11,792 DEBUG [hconnection-0x369ca486-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:17:11,796 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:17:11,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:11,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:11,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:36288 deadline: 1689953831806, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
2023-07-21 15:17:11,807 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:11,808 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:11,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,810 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:11,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:11,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:11,868 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=554 (was 512) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@41bec2fb sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33679 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1070353326-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:42415 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1047753321_17 at /127.0.0.1:53834 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1070353326-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@59e85f49 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x1d7973b4-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data3/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:42415 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-5-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 33679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp604852410-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1047753321_17 at /127.0.0.1:33674 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: M:0;jenkins-hbase17:32893 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) 
org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x2ff349c0-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: qtp876403506-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 723036012@qtp-1386637619-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39343 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2313 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1099451659@qtp-1675783152-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp102877120-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@485aad49[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952629920 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 35877 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:35877 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp604852410-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43503 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase17:42481-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:54260 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea-prefix:jenkins-hbase17.apache.org,38059,1689952629216 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5fdada10[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea-prefix:jenkins-hbase17.apache.org,44393,1689952629411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2319 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215491331-2596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7a97e446-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:35877 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp876403506-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3fb06810-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp102877120-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:53844 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1ce01d1d sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x6aff5dfe sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x1d7973b4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1070353326-2222 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp102877120-2331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp604852410-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-11b5f49-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data2/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1070353326-2223-acceptor-0@1fa74394-ServerConnector@26781894{HTTP/1.1, (http/1.1)}{0.0.0.0:33545} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data5/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ff349c0-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:33682 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@637c4e68 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215491331-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp604852410-2283 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:42415 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6067d2a4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3e16adec-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:3;jenkins-hbase17:45125-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x369ca486-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp215491331-2589 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2253 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1047753321_17 at /127.0.0.1:54240 [Receiving block 
BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1070353326-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x1d7973b4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-224074a2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2674f8ac java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@ba1a40d[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase17:38059-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@68826bad sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data1/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp102877120-2326 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60449@0x15e51e64-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1309000483_17 at /127.0.0.1:33658 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 857753307@qtp-1386637619-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data6/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68d4ffa-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: IPC Server handler 4 on default port 43503 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7a97e446-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 33679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,36713,1689952623586 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:53842 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:1;jenkins-hbase17:44393-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@286ee521 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@a8d42d8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ForkJoinPool-5-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging 
thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp604852410-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1299176372_17 at /127.0.0.1:54210 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43503 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2142615088-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43503 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:54248 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData-prefix:jenkins-hbase17.apache.org,32893,1689952629060 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@e0ab8aa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase17:44393 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp102877120-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215491331-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3e16adec sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3fb06810-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@6dc28455 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 853358662@qtp-95923142-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34471 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp604852410-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7851c3bf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp215491331-2590-acceptor-0@45869f74-ServerConnector@2890b019{HTTP/1.1, (http/1.1)}{0.0.0.0:44967} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:42415 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:42415 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-5-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: 1308624066@qtp-95923142-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60449@0x15e51e64-SendThread(127.0.0.1:60449) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1309000483_17 at /127.0.0.1:53828 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp604852410-2284-acceptor-0@1d74d3ec-ServerConnector@116f5b6e{HTTP/1.1, (http/1.1)}{0.0.0.0:33251} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:35877 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:42481 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea-prefix:jenkins-hbase17.apache.org,42481,1689952629569 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/37143-SendThread(127.0.0.1:60449) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/36325 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 609197734@qtp-1675783152-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35469 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data4/current/BP-1436379343-136.243.18.41-1689952628262 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 123493814@qtp-314811671-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38179 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 33679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3e16adec-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:35877 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:42415 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1b8eef82 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2254-acceptor-0@1c7c00df-ServerConnector@5dc82141{HTTP/1.1, (http/1.1)}{0.0.0.0:41585} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35877 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7851c3bf-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x249d596a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57770 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@4523e08d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1070353326-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:54258 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42481 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1981053234@qtp-314811671-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:42415 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-5-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp215491331-2594 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7851c3bf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:44393Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37143-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x3fb06810 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7e45d0c5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1299176372_17 at /127.0.0.1:33618 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@ec7bfa9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1070353326-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:42415 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6e4abe0a-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36325 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1299176372_17 at /127.0.0.1:53800 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea-prefix:jenkins-hbase17.apache.org,42481,1689952629569.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369ca486-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215491331-2595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp102877120-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:33580 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:45125 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(1973884627) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Client (1372221911) connection to localhost.localdomain/127.0.0.1:42415 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2142615088-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43503 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase17:45125Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1436379343-136.243.18.41-1689952628262:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp102877120-2328-acceptor-0@51885313-ServerConnector@8e62974{HTTP/1.1, (http/1.1)}{0.0.0.0:35533} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43503 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1309000483_17 at /127.0.0.1:54236 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:38059Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp876403506-2314-acceptor-0@17d20642-ServerConnector@e8ecf8b{HTTP/1.1, (http/1.1)}{0.0.0.0:37999} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp102877120-2327 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/589845269.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36325-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost.localdomain/36325.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1070353326-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_500974972_17 at /127.0.0.1:33688 [Receiving block BP-1436379343-136.243.18.41-1689952628262:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:42481Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215491331-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-5-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:35877 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952629910 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp2142615088-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1299176372_17 at /127.0.0.1:53782 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x249d596a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp604852410-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x6aff5dfe-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60449@0x15e51e64 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase17:38059 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 0 on default port 36325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x2ff349c0-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:57770): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,32893,1689952629060 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x7a97e446 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/744812566.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35877 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x249d596a-SendThread(127.0.0.1:57770) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57770@0x6aff5dfe-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=812 (was 805) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=626 (was 580) - SystemLoadAverage LEAK? -, ProcessCount=184 (was 184), AvailableMemoryMB=3079 (was 3228) 2023-07-21 15:17:11,915 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-21 15:17:11,916 INFO [RS:3;jenkins-hbase17:45125] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C45125%2C1689952631583, suffix=, logDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,45125,1689952631583, archiveDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs, maxLogs=32 2023-07-21 15:17:11,935 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK] 2023-07-21 15:17:11,936 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=554, OpenFileDescriptor=812, MaxFileDescriptor=60000, SystemLoadAverage=626, ProcessCount=184, AvailableMemoryMB=2999 2023-07-21 15:17:11,936 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-21 15:17:11,936 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-21 15:17:11,940 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK] 2023-07-21 15:17:11,940 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK] 2023-07-21 15:17:11,945 INFO [RS:3;jenkins-hbase17:45125] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/WALs/jenkins-hbase17.apache.org,45125,1689952631583/jenkins-hbase17.apache.org%2C45125%2C1689952631583.1689952631920 2023-07-21 15:17:11,945 DEBUG [RS:3;jenkins-hbase17:45125] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42217,DS-250ecc10-3c83-475b-9bed-82e20a5e50cd,DISK], 
DatanodeInfoWithStorage[127.0.0.1:37087,DS-b329c039-e3c5-445c-abcd-5566f4a4de1f,DISK], DatanodeInfoWithStorage[127.0.0.1:37369,DS-7b8be3eb-31a7-49e0-a101-bcdf20685c97,DISK]] 2023-07-21 15:17:11,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:11,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:17:11,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:11,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:11,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:11,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:11,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:11,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:11,956 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:11,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:11,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:11,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:11,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:11,963 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:11,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:11,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:36288 deadline: 1689953831965, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:11,966 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:11,968 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:11,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:11,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:11,969 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:11,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:11,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:11,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:11,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 15:17:11,974 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:11,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 15:17:11,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 15:17:11,976 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:11,976 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:11,976 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:11,978 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:17:11,979 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:11,980 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68 empty. 2023-07-21 15:17:11,981 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:11,981 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 15:17:11,993 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 15:17:11,994 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5c26b1f92e9fc507903463c546941e68, NAME => 't1,,1689952631971.5c26b1f92e9fc507903463c546941e68.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp 2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 5c26b1f92e9fc507903463c546941e68, disabling compactions & flushes 2023-07-21 15:17:12,019 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. after waiting 0 ms 2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:12,019 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 
2023-07-21 15:17:12,019 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 5c26b1f92e9fc507903463c546941e68: 2023-07-21 15:17:12,022 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:17:12,023 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952632023"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952632023"}]},"ts":"1689952632023"} 2023-07-21 15:17:12,024 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:17:12,025 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:17:12,025 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952632025"}]},"ts":"1689952632025"} 2023-07-21 15:17:12,032 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:17:12,037 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:17:12,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, ASSIGN}] 2023-07-21 15:17:12,038 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 15:17:12,041 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, ASSIGN 2023-07-21 15:17:12,044 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38059,1689952629216; forceNewPlan=false, retain=false 2023-07-21 15:17:12,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 15:17:12,194 INFO [jenkins-hbase17:32893] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:17:12,198 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5c26b1f92e9fc507903463c546941e68, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:12,198 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952632198"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952632198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952632198"}]},"ts":"1689952632198"} 2023-07-21 15:17:12,200 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 5c26b1f92e9fc507903463c546941e68, server=jenkins-hbase17.apache.org,38059,1689952629216}] 2023-07-21 15:17:12,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 15:17:12,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:12,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c26b1f92e9fc507903463c546941e68, NAME => 't1,,1689952631971.5c26b1f92e9fc507903463c546941e68.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:17:12,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:17:12,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,357 INFO [StoreOpener-5c26b1f92e9fc507903463c546941e68-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,359 DEBUG [StoreOpener-5c26b1f92e9fc507903463c546941e68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68/cf1 2023-07-21 15:17:12,359 DEBUG [StoreOpener-5c26b1f92e9fc507903463c546941e68-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68/cf1 2023-07-21 15:17:12,359 INFO [StoreOpener-5c26b1f92e9fc507903463c546941e68-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c26b1f92e9fc507903463c546941e68 columnFamilyName cf1 2023-07-21 15:17:12,360 INFO [StoreOpener-5c26b1f92e9fc507903463c546941e68-1] regionserver.HStore(310): Store=5c26b1f92e9fc507903463c546941e68/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:17:12,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:12,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:17:12,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 5c26b1f92e9fc507903463c546941e68; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10357082240, jitterRate=-0.035421550273895264}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:17:12,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 5c26b1f92e9fc507903463c546941e68: 2023-07-21 15:17:12,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689952631971.5c26b1f92e9fc507903463c546941e68., pid=14, masterSystemTime=1689952632352 2023-07-21 15:17:12,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:12,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 
2023-07-21 15:17:12,374 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5c26b1f92e9fc507903463c546941e68, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:12,375 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952632374"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952632374"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952632374"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952632374"}]},"ts":"1689952632374"} 2023-07-21 15:17:12,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 15:17:12,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 5c26b1f92e9fc507903463c546941e68, server=jenkins-hbase17.apache.org,38059,1689952629216 in 183 msec 2023-07-21 15:17:12,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 15:17:12,389 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, ASSIGN in 348 msec 2023-07-21 15:17:12,395 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:17:12,395 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952632395"}]},"ts":"1689952632395"} 2023-07-21 15:17:12,396 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 15:17:12,398 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:17:12,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 427 msec 2023-07-21 15:17:12,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 15:17:12,580 INFO [Listener at localhost.localdomain/36325] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 15:17:12,580 DEBUG [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 15:17:12,580 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:12,582 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 15:17:12,583 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:12,583 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-21 15:17:12,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:17:12,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 15:17:12,587 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:17:12,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 15:17:12,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 136.243.18.41:36288 deadline: 1689952692584, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 15:17:12,590 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:12,592 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-21 15:17:12,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:12,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:12,692 INFO [Listener at localhost.localdomain/36325] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 15:17:12,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable t1 2023-07-21 15:17:12,693 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 15:17:12,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:17:12,696 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952632696"}]},"ts":"1689952632696"} 2023-07-21 15:17:12,697 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 15:17:12,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:17:12,940 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 15:17:12,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, UNASSIGN}] 2023-07-21 15:17:12,942 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, UNASSIGN 2023-07-21 15:17:12,942 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5c26b1f92e9fc507903463c546941e68, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:12,942 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952632942"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952632942"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952632942"}]},"ts":"1689952632942"} 2023-07-21 15:17:12,943 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 5c26b1f92e9fc507903463c546941e68, server=jenkins-hbase17.apache.org,38059,1689952629216}] 2023-07-21 15:17:12,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:17:13,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:13,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 5c26b1f92e9fc507903463c546941e68, disabling compactions & flushes 2023-07-21 15:17:13,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:13,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:13,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 
after waiting 0 ms 2023-07-21 15:17:13,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:13,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/default/t1/5c26b1f92e9fc507903463c546941e68/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:17:13,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed t1,,1689952631971.5c26b1f92e9fc507903463c546941e68. 2023-07-21 15:17:13,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 5c26b1f92e9fc507903463c546941e68: 2023-07-21 15:17:13,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:13,107 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5c26b1f92e9fc507903463c546941e68, regionState=CLOSED 2023-07-21 15:17:13,107 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689952633107"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952633107"}]},"ts":"1689952633107"} 2023-07-21 15:17:13,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 15:17:13,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 5c26b1f92e9fc507903463c546941e68, server=jenkins-hbase17.apache.org,38059,1689952629216 in 166 msec 2023-07-21 15:17:13,113 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 15:17:13,113 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=5c26b1f92e9fc507903463c546941e68, UNASSIGN in 170 msec 2023-07-21 15:17:13,113 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952633113"}]},"ts":"1689952633113"} 2023-07-21 15:17:13,116 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 15:17:13,118 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 15:17:13,121 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 426 msec 2023-07-21 15:17:13,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:17:13,301 INFO [Listener at localhost.localdomain/36325] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 15:17:13,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete t1 2023-07-21 15:17:13,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 
2023-07-21 15:17:13,304 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 15:17:13,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 15:17:13,305 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 15:17:13,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,308 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:13,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 15:17:13,309 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68/cf1, FileablePath, hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68/recovered.edits] 2023-07-21 15:17:13,314 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68/recovered.edits/4.seqid to hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/archive/data/default/t1/5c26b1f92e9fc507903463c546941e68/recovered.edits/4.seqid 2023-07-21 15:17:13,315 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/.tmp/data/default/t1/5c26b1f92e9fc507903463c546941e68 2023-07-21 15:17:13,315 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 15:17:13,318 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 15:17:13,320 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 15:17:13,322 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 15:17:13,323 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 15:17:13,323 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-21 15:17:13,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689952631971.5c26b1f92e9fc507903463c546941e68.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952633323"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:13,326 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:17:13,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5c26b1f92e9fc507903463c546941e68, NAME => 't1,,1689952631971.5c26b1f92e9fc507903463c546941e68.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:17:13,326 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 15:17:13,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952633326"}]},"ts":"9223372036854775807"} 2023-07-21 15:17:13,328 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 15:17:13,333 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 15:17:13,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 32 msec 2023-07-21 15:17:13,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 15:17:13,409 INFO [Listener at localhost.localdomain/36325] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 15:17:13,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:17:13,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,422 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 107 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:36288 deadline: 1689953833430, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,431 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:17:13,435 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,436 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,462 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 554) - Thread LEAK? -, OpenFileDescriptor=833 (was 812) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=626 (was 626), ProcessCount=184 (was 184), AvailableMemoryMB=2987 (was 2999) 2023-07-21 15:17:13,462 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-21 15:17:13,487 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=626, ProcessCount=184, AvailableMemoryMB=2986 2023-07-21 15:17:13,487 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-21 15:17:13,487 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 15:17:13,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:17:13,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,500 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-21 15:17:13,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 135 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953833510, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,511 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:13,512 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,513 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 15:17:13,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:17:13,517 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-21 15:17:13,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 15:17:13,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 15:17:13,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:17:13,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,535 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 170 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953833546, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,547 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:17:13,549 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,550 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,571 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=626 (was 626), ProcessCount=184 (was 184), AvailableMemoryMB=2986 (was 2986) 2023-07-21 15:17:13,571 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 15:17:13,593 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=626, ProcessCount=184, AvailableMemoryMB=2985 2023-07-21 15:17:13,593 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 15:17:13,593 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 15:17:13,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:17:13,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,607 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/default 2023-07-21 15:17:13,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 198 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953833619, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,620 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:13,622 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,624 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:17:13,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,636 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 226 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953833644, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,645 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:17:13,647 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,648 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,667 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=626 (was 626), ProcessCount=184 (was 184), AvailableMemoryMB=2983 (was 2985) 2023-07-21 15:17:13,668 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 15:17:13,689 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=626, ProcessCount=184, AvailableMemoryMB=2983 2023-07-21 15:17:13,689 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 15:17:13,689 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 15:17:13,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:13,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:13,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:13,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:13,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:13,703 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:13,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:13,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-21 15:17:13,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:13,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:13,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 254 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953833712, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:13,713 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:17:13,715 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:13,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,716 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:13,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:13,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:13,717 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 15:17:13,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_foo 2023-07-21 15:17:13,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 15:17:13,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:17:13,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 15:17:13,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-21 15:17:13,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:17:13,733 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:17:13,736 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-21 15:17:13,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:17:13,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_foo 2023-07-21 15:17:13,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:13,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 270 service: MasterService methodName: ExecMasterService size: 91 connection: 136.243.18.41:36288 deadline: 1689953833832, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 15:17:13,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$16(3053): Client=jenkins//136.243.18.41 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 15:17:13,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:17:13,851 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 15:17:13,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-21 15:17:13,951 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 15:17:13,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_anotherGroup 2023-07-21 15:17:13,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 15:17:13,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:13,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 15:17:13,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:13,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 15:17:13,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:13,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:13,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:13,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_foo 2023-07-21 15:17:13,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,973 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,976 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 15:17:13,978 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,979 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 15:17:13,979 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 
15:17:13,980 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,983 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:17:13,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-21 15:17:14,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 15:17:14,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_foo 2023-07-21 15:17:14,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 15:17:14,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:14,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:14,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 15:17:14,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:14,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:14,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:14,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:14,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 292 service: MasterService methodName: CreateNamespace size: 70 connection: 136.243.18.41:36288 deadline: 1689952694087, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 15:17:14,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:14,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:14,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:14,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:17:14,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:14,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_anotherGroup 2023-07-21 15:17:14,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:14,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:14,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:17:14,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:14,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:17:14,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:17:14,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:17:14,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:17:14,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:17:14,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:17:14,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:14,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:17:14,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:17:14,117 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:17:14,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:17:14,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:17:14,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:17:14,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:17:14,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:17:14,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:14,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:14,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:32893] to rsgroup master 2023-07-21 15:17:14,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:17:14,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] ipc.CallRunner(144): callId: 322 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:36288 deadline: 1689953834127, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 2023-07-21 15:17:14,128 WARN [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:32893 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:17:14,130 INFO [Listener at localhost.localdomain/36325] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:17:14,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:17:14,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:17:14,131 INFO [Listener at localhost.localdomain/36325] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:38059, jenkins-hbase17.apache.org:42481, jenkins-hbase17.apache.org:44393, jenkins-hbase17.apache.org:45125], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:17:14,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:17:14,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32893] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:17:14,152 INFO [Listener at localhost.localdomain/36325] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=569 (was 572), OpenFileDescriptor=832 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=626 (was 626), ProcessCount=187 (was 184) - ProcessCount LEAK? 
-, AvailableMemoryMB=2978 (was 2983) 2023-07-21 15:17:14,152 WARN [Listener at localhost.localdomain/36325] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-21 15:17:14,153 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 15:17:14,153 INFO [Listener at localhost.localdomain/36325] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3fb06810 to 127.0.0.1:57770 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] util.JVMClusterUtil(257): Found active master hash=2026695328, stopped=false 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:17:14,153 DEBUG [Listener at localhost.localdomain/36325] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:17:14,153 INFO [Listener at localhost.localdomain/36325] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:17:14,154 INFO [Listener at localhost.localdomain/36325] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:17:14,154 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:14,154 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:14,155 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:14,155 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:14,155 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:14,155 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:17:14,155 DEBUG [Listener at localhost.localdomain/36325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1d7973b4 to 127.0.0.1:57770 2023-07-21 15:17:14,155 DEBUG [Listener at localhost.localdomain/36325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38059,1689952629216' ***** 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,44393,1689952629411' ***** 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:14,155 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,42481,1689952629569' ***** 2023-07-21 15:17:14,155 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:14,155 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:14,155 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:14,156 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,45125,1689952631583' ***** 2023-07-21 15:17:14,156 INFO [Listener at localhost.localdomain/36325] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:17:14,156 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:14,157 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,161 INFO [RS:2;jenkins-hbase17:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6d88d7ed{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:14,161 INFO [RS:3;jenkins-hbase17:45125] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@36d7d462{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:14,161 INFO [RS:0;jenkins-hbase17:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3e25f622{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:14,161 INFO [RS:1;jenkins-hbase17:44393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19c62d88{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:17:14,161 INFO [RS:0;jenkins-hbase17:38059] server.AbstractConnector(383): Stopped ServerConnector@5dc82141{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,161 INFO [RS:2;jenkins-hbase17:42481] server.AbstractConnector(383): Stopped ServerConnector@e8ecf8b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,161 INFO [RS:3;jenkins-hbase17:45125] server.AbstractConnector(383): Stopped ServerConnector@2890b019{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,162 INFO [RS:2;jenkins-hbase17:42481] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:14,161 INFO [RS:0;jenkins-hbase17:38059] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:14,162 INFO [RS:1;jenkins-hbase17:44393] server.AbstractConnector(383): Stopped ServerConnector@116f5b6e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,162 INFO [RS:3;jenkins-hbase17:45125] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:14,164 INFO [RS:1;jenkins-hbase17:44393] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:14,164 INFO [RS:0;jenkins-hbase17:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55005b15{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:14,163 INFO [RS:2;jenkins-hbase17:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e2bab9d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:14,165 INFO [RS:1;jenkins-hbase17:44393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@dff6508{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:14,165 INFO [RS:3;jenkins-hbase17:45125] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ff23c7c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:14,167 INFO [RS:1;jenkins-hbase17:44393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f111747{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:14,167 INFO 
[RS:2;jenkins-hbase17:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28bc3494{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:14,166 INFO [RS:0;jenkins-hbase17:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f8c1327{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:14,168 INFO [RS:3;jenkins-hbase17:45125] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@62566cb6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:14,168 INFO [RS:1;jenkins-hbase17:44393] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:14,169 INFO [RS:1;jenkins-hbase17:44393] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:14,169 INFO [RS:0;jenkins-hbase17:38059] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:14,169 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:14,169 INFO [RS:0;jenkins-hbase17:38059] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:14,169 INFO [RS:1;jenkins-hbase17:44393] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:14,169 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:14,169 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:14,169 INFO [RS:0;jenkins-hbase17:38059] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:14,169 DEBUG [RS:1;jenkins-hbase17:44393] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6aff5dfe to 127.0.0.1:57770 2023-07-21 15:17:14,169 INFO [RS:3;jenkins-hbase17:45125] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:14,169 DEBUG [RS:1;jenkins-hbase17:44393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,169 INFO [RS:3;jenkins-hbase17:45125] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 15:17:14,169 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(3305): Received CLOSE for 2662492254ae1355d69b669151350966 2023-07-21 15:17:14,169 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:14,169 DEBUG [RS:0;jenkins-hbase17:38059] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7851c3bf to 127.0.0.1:57770 2023-07-21 15:17:14,170 DEBUG [RS:0;jenkins-hbase17:38059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,170 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:17:14,170 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1478): Online Regions={2662492254ae1355d69b669151350966=hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966.} 2023-07-21 15:17:14,170 DEBUG [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1504): Waiting on 2662492254ae1355d69b669151350966 2023-07-21 15:17:14,169 INFO [RS:2;jenkins-hbase17:42481] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:17:14,170 INFO [RS:2;jenkins-hbase17:42481] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:17:14,170 INFO [RS:2;jenkins-hbase17:42481] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:14,170 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(3305): Received CLOSE for b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:14,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 2662492254ae1355d69b669151350966, disabling compactions & flushes 2023-07-21 15:17:14,169 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:14,169 INFO [RS:3;jenkins-hbase17:45125] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:17:14,169 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,44393,1689952629411; all regions closed. 2023-07-21 15:17:14,170 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:14,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:14,170 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:17:14,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:14,170 DEBUG [RS:3;jenkins-hbase17:45125] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x249d596a to 127.0.0.1:57770 2023-07-21 15:17:14,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. after waiting 0 ms 2023-07-21 15:17:14,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 
2023-07-21 15:17:14,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 2662492254ae1355d69b669151350966 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 15:17:14,171 DEBUG [RS:3;jenkins-hbase17:45125] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,171 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,45125,1689952631583; all regions closed. 2023-07-21 15:17:14,172 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:14,172 DEBUG [RS:2;jenkins-hbase17:42481] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a97e446 to 127.0.0.1:57770 2023-07-21 15:17:14,172 DEBUG [RS:2;jenkins-hbase17:42481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,172 INFO [RS:2;jenkins-hbase17:42481] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:14,172 INFO [RS:2;jenkins-hbase17:42481] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:14,172 INFO [RS:2;jenkins-hbase17:42481] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:14,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b0c31c07f229872023897d5df93447bd, disabling compactions & flushes 2023-07-21 15:17:14,172 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:17:14,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:14,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:14,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. after waiting 0 ms 2023-07-21 15:17:14,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 
2023-07-21 15:17:14,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing b0c31c07f229872023897d5df93447bd 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-21 15:17:14,173 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,174 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 15:17:14,174 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1478): Online Regions={b0c31c07f229872023897d5df93447bd=hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 15:17:14,174 DEBUG [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1504): Waiting on 1588230740, b0c31c07f229872023897d5df93447bd 2023-07-21 15:17:14,174 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:17:14,174 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:17:14,174 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:17:14,174 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:17:14,174 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:17:14,174 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-21 15:17:14,184 DEBUG [RS:1;jenkins-hbase17:44393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs 2023-07-21 15:17:14,185 DEBUG [RS:3;jenkins-hbase17:45125] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs 2023-07-21 15:17:14,185 INFO [RS:1;jenkins-hbase17:44393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C44393%2C1689952629411:(num 1689952630160) 2023-07-21 15:17:14,185 DEBUG [RS:1;jenkins-hbase17:44393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,185 INFO [RS:1;jenkins-hbase17:44393] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,185 INFO [RS:3;jenkins-hbase17:45125] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C45125%2C1689952631583:(num 1689952631920) 2023-07-21 15:17:14,185 DEBUG [RS:3;jenkins-hbase17:45125] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,185 INFO [RS:3;jenkins-hbase17:45125] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,186 INFO [RS:3;jenkins-hbase17:45125] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:14,186 INFO [RS:3;jenkins-hbase17:45125] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:14,186 INFO [RS:3;jenkins-hbase17:45125] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 15:17:14,186 INFO [RS:3;jenkins-hbase17:45125] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:14,186 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:14,188 INFO [RS:1;jenkins-hbase17:44393] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:14,188 INFO [RS:3;jenkins-hbase17:45125] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:45125 2023-07-21 15:17:14,189 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:14,189 INFO [RS:1;jenkins-hbase17:44393] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:14,189 INFO [RS:1;jenkins-hbase17:44393] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:14,189 INFO [RS:1;jenkins-hbase17:44393] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:17:14,191 INFO [RS:1;jenkins-hbase17:44393] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:44393 2023-07-21 15:17:14,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/.tmp/info/542bac6eb52d4bfa812857af7c7d97a5 2023-07-21 15:17:14,210 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/info/f063c0680d484c9aa285eb6cfd9e5d2b 2023-07-21 15:17:14,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/.tmp/m/15db7ac5cf864aaeb0f3e61318c9435f 2023-07-21 15:17:14,216 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f063c0680d484c9aa285eb6cfd9e5d2b 2023-07-21 15:17:14,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 542bac6eb52d4bfa812857af7c7d97a5 2023-07-21 15:17:14,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/.tmp/info/542bac6eb52d4bfa812857af7c7d97a5 as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/info/542bac6eb52d4bfa812857af7c7d97a5 2023-07-21 15:17:14,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) 
metadata for 542bac6eb52d4bfa812857af7c7d97a5 2023-07-21 15:17:14,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/info/542bac6eb52d4bfa812857af7c7d97a5, entries=3, sequenceid=9, filesize=5.0 K 2023-07-21 15:17:14,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15db7ac5cf864aaeb0f3e61318c9435f 2023-07-21 15:17:14,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 2662492254ae1355d69b669151350966 in 56ms, sequenceid=9, compaction requested=false 2023-07-21 15:17:14,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/.tmp/m/15db7ac5cf864aaeb0f3e61318c9435f as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/m/15db7ac5cf864aaeb0f3e61318c9435f 2023-07-21 15:17:14,238 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15db7ac5cf864aaeb0f3e61318c9435f 2023-07-21 15:17:14,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/m/15db7ac5cf864aaeb0f3e61318c9435f, entries=12, sequenceid=29, filesize=5.5 K 2023-07-21 15:17:14,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for b0c31c07f229872023897d5df93447bd in 67ms, sequenceid=29, compaction requested=false 2023-07-21 15:17:14,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/namespace/2662492254ae1355d69b669151350966/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 15:17:14,243 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 2023-07-21 15:17:14,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 2662492254ae1355d69b669151350966: 2023-07-21 15:17:14,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952630506.2662492254ae1355d69b669151350966. 
2023-07-21 15:17:14,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/rsgroup/b0c31c07f229872023897d5df93447bd/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 15:17:14,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:14,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:14,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b0c31c07f229872023897d5df93447bd: 2023-07-21 15:17:14,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952630990.b0c31c07f229872023897d5df93447bd. 2023-07-21 15:17:14,256 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/rep_barrier/a4ce68bbc34c40d28f8d737660c997be 2023-07-21 15:17:14,263 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4ce68bbc34c40d28f8d737660c997be 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,273 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,273 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:14,273 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,44393,1689952629411] 2023-07-21 15:17:14,273 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,44393,1689952629411; numProcessing=1 2023-07-21 15:17:14,274 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,44393,1689952629411 already deleted, retry=false 2023-07-21 15:17:14,274 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,44393,1689952629411 expired; onlineServers=3 2023-07-21 15:17:14,274 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,45125,1689952631583] 2023-07-21 15:17:14,274 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,45125,1689952631583; numProcessing=2 2023-07-21 15:17:14,275 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,45125,1689952631583 already deleted, retry=false 2023-07-21 15:17:14,275 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,45125,1689952631583 expired; onlineServers=2 2023-07-21 15:17:14,272 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,45125,1689952631583 2023-07-21 15:17:14,273 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,273 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:14,276 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:14,276 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,276 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44393,1689952629411 2023-07-21 15:17:14,276 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 
(bloomFilter=false), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/table/c8e511a936ee407fbcda255ff3260867 2023-07-21 15:17:14,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c8e511a936ee407fbcda255ff3260867 2023-07-21 15:17:14,282 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/info/f063c0680d484c9aa285eb6cfd9e5d2b as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/info/f063c0680d484c9aa285eb6cfd9e5d2b 2023-07-21 15:17:14,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f063c0680d484c9aa285eb6cfd9e5d2b 2023-07-21 15:17:14,288 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/info/f063c0680d484c9aa285eb6cfd9e5d2b, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 15:17:14,289 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/rep_barrier/a4ce68bbc34c40d28f8d737660c997be as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/rep_barrier/a4ce68bbc34c40d28f8d737660c997be 2023-07-21 15:17:14,294 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4ce68bbc34c40d28f8d737660c997be 2023-07-21 15:17:14,294 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/rep_barrier/a4ce68bbc34c40d28f8d737660c997be, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 15:17:14,295 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/.tmp/table/c8e511a936ee407fbcda255ff3260867 as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/table/c8e511a936ee407fbcda255ff3260867 2023-07-21 15:17:14,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c8e511a936ee407fbcda255ff3260867 2023-07-21 15:17:14,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/table/c8e511a936ee407fbcda255ff3260867, entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 15:17:14,301 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 
1588230740 in 127ms, sequenceid=26, compaction requested=false 2023-07-21 15:17:14,310 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 15:17:14,310 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:17:14,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:14,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:17:14,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:17:14,370 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38059,1689952629216; all regions closed. 2023-07-21 15:17:14,374 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,42481,1689952629569; all regions closed. 2023-07-21 15:17:14,378 DEBUG [RS:0;jenkins-hbase17:38059] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs 2023-07-21 15:17:14,378 INFO [RS:0;jenkins-hbase17:38059] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38059%2C1689952629216:(num 1689952630165) 2023-07-21 15:17:14,378 DEBUG [RS:0;jenkins-hbase17:38059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,378 INFO [RS:0;jenkins-hbase17:38059] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,379 INFO [RS:0;jenkins-hbase17:38059] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:14,379 INFO [RS:0;jenkins-hbase17:38059] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:17:14,379 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:14,379 INFO [RS:0;jenkins-hbase17:38059] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:17:14,379 INFO [RS:0;jenkins-hbase17:38059] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 15:17:14,380 INFO [RS:0;jenkins-hbase17:38059] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38059 2023-07-21 15:17:14,381 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:14,381 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38059,1689952629216 2023-07-21 15:17:14,381 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,382 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38059,1689952629216] 2023-07-21 15:17:14,382 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38059,1689952629216; numProcessing=3 2023-07-21 15:17:14,382 DEBUG [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs 2023-07-21 15:17:14,382 INFO [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C42481%2C1689952629569.meta:.meta(num 1689952630452) 2023-07-21 15:17:14,383 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38059,1689952629216 already deleted, retry=false 2023-07-21 15:17:14,383 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38059,1689952629216 expired; onlineServers=1 2023-07-21 15:17:14,389 DEBUG [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/oldWALs 2023-07-21 15:17:14,389 INFO [RS:2;jenkins-hbase17:42481] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C42481%2C1689952629569:(num 1689952630165) 2023-07-21 15:17:14,389 DEBUG [RS:2;jenkins-hbase17:42481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,389 INFO [RS:2;jenkins-hbase17:42481] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:17:14,389 INFO [RS:2;jenkins-hbase17:42481] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:17:14,390 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:17:14,391 INFO [RS:2;jenkins-hbase17:42481] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:42481 2023-07-21 15:17:14,395 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,42481,1689952629569 2023-07-21 15:17:14,395 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:17:14,395 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,42481,1689952629569] 2023-07-21 15:17:14,395 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,42481,1689952629569; numProcessing=4 2023-07-21 15:17:14,396 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,42481,1689952629569 already deleted, retry=false 2023-07-21 15:17:14,396 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,42481,1689952629569 expired; onlineServers=0 2023-07-21 15:17:14,396 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,32893,1689952629060' ***** 2023-07-21 15:17:14,396 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:17:14,397 DEBUG [M:0;jenkins-hbase17:32893] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cf36161, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:17:14,397 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:17:14,400 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:17:14,401 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:17:14,401 INFO [M:0;jenkins-hbase17:32893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7ac1c0b6{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:17:14,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:17:14,401 INFO [M:0;jenkins-hbase17:32893] server.AbstractConnector(383): Stopped ServerConnector@26781894{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,401 INFO [M:0;jenkins-hbase17:32893] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:17:14,403 INFO 
[M:0;jenkins-hbase17:32893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@52c70bdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:17:14,403 INFO [M:0;jenkins-hbase17:32893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@511b45ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/hadoop.log.dir/,STOPPED} 2023-07-21 15:17:14,404 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,32893,1689952629060 2023-07-21 15:17:14,404 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,32893,1689952629060; all regions closed. 2023-07-21 15:17:14,404 DEBUG [M:0;jenkins-hbase17:32893] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:17:14,404 INFO [M:0;jenkins-hbase17:32893] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:17:14,405 INFO [M:0;jenkins-hbase17:32893] server.AbstractConnector(383): Stopped ServerConnector@8e62974{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:17:14,405 DEBUG [M:0;jenkins-hbase17:32893] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:17:14,405 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 15:17:14,405 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952629910] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952629910,5,FailOnTimeoutGroup] 2023-07-21 15:17:14,405 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952629920] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952629920,5,FailOnTimeoutGroup] 2023-07-21 15:17:14,405 DEBUG [M:0;jenkins-hbase17:32893] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:17:14,406 INFO [M:0;jenkins-hbase17:32893] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:17:14,406 INFO [M:0;jenkins-hbase17:32893] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 15:17:14,406 INFO [M:0;jenkins-hbase17:32893] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 15:17:14,406 DEBUG [M:0;jenkins-hbase17:32893] master.HMaster(1512): Stopping service threads 2023-07-21 15:17:14,406 INFO [M:0;jenkins-hbase17:32893] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:17:14,406 ERROR [M:0;jenkins-hbase17:32893] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 15:17:14,406 INFO [M:0;jenkins-hbase17:32893] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:17:14,406 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 15:17:14,408 DEBUG [M:0;jenkins-hbase17:32893] zookeeper.ZKUtil(398): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:17:14,408 WARN [M:0;jenkins-hbase17:32893] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:17:14,408 INFO [M:0;jenkins-hbase17:32893] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:17:14,409 INFO [M:0;jenkins-hbase17:32893] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:17:14,409 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:17:14,409 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:14,409 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:14,409 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:17:14,410 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:17:14,410 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.27 KB heapSize=90.73 KB 2023-07-21 15:17:14,456 INFO [M:0;jenkins-hbase17:32893] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.27 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52eafa2aec134f04bfcb91aaafcfae8a 2023-07-21 15:17:14,465 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52eafa2aec134f04bfcb91aaafcfae8a as hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52eafa2aec134f04bfcb91aaafcfae8a 2023-07-21 15:17:14,474 INFO [M:0;jenkins-hbase17:32893] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35877/user/jenkins/test-data/29997b85-4def-420d-2691-3c32c97820ea/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52eafa2aec134f04bfcb91aaafcfae8a, entries=22, sequenceid=175, filesize=11.1 K 2023-07-21 15:17:14,475 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegion(2948): Finished flush of dataSize ~76.27 KB/78102, heapSize ~90.71 KB/92888, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 64ms, sequenceid=175, compaction requested=false 2023-07-21 15:17:14,478 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:17:14,478 DEBUG [M:0;jenkins-hbase17:32893] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:17:14,487 INFO [M:0;jenkins-hbase17:32893] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:17:14,488 INFO [M:0;jenkins-hbase17:32893] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:32893 2023-07-21 15:17:14,488 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:17:14,489 DEBUG [M:0;jenkins-hbase17:32893] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,32893,1689952629060 already deleted, retry=false 2023-07-21 15:17:14,655 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,655 INFO [M:0;jenkins-hbase17:32893] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,32893,1689952629060; zookeeper connection closed. 2023-07-21 15:17:14,655 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): master:32893-0x10188742c4f0000, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,755 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,755 INFO [RS:2;jenkins-hbase17:42481] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,42481,1689952629569; zookeeper connection closed. 2023-07-21 15:17:14,755 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:42481-0x10188742c4f0003, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,756 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4da4e084] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4da4e084 2023-07-21 15:17:14,855 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,856 INFO [RS:0;jenkins-hbase17:38059] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38059,1689952629216; zookeeper connection closed. 
2023-07-21 15:17:14,856 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x10188742c4f0001, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,856 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7edc5ff9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7edc5ff9 2023-07-21 15:17:14,956 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,956 INFO [RS:3;jenkins-hbase17:45125] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,45125,1689952631583; zookeeper connection closed. 2023-07-21 15:17:14,956 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:45125-0x10188742c4f000b, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:14,956 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@306884b6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@306884b6 2023-07-21 15:17:15,056 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:15,056 INFO [RS:1;jenkins-hbase17:44393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,44393,1689952629411; zookeeper connection closed. 
2023-07-21 15:17:15,056 DEBUG [Listener at localhost.localdomain/36325-EventThread] zookeeper.ZKWatcher(600): regionserver:44393-0x10188742c4f0002, quorum=127.0.0.1:57770, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:17:15,057 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@228fb82d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@228fb82d 2023-07-21 15:17:15,057 INFO [Listener at localhost.localdomain/36325] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 15:17:15,057 WARN [Listener at localhost.localdomain/36325] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:15,060 INFO [Listener at localhost.localdomain/36325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:15,162 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:15,162 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1436379343-136.243.18.41-1689952628262 (Datanode Uuid cb321988-c571-4203-92b4-cb68a2e2888c) service to localhost.localdomain/127.0.0.1:35877 2023-07-21 15:17:15,163 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data5/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:15,164 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data6/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:15,165 WARN [Listener at localhost.localdomain/36325] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:15,168 INFO [Listener at localhost.localdomain/36325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:15,271 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:15,271 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1436379343-136.243.18.41-1689952628262 (Datanode Uuid 097a463f-6583-4c42-8916-6d4340735af1) service to localhost.localdomain/127.0.0.1:35877 2023-07-21 15:17:15,272 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data3/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
2023-07-21 15:17:15,272 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data4/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:15,273 WARN [Listener at localhost.localdomain/36325] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:17:15,277 INFO [Listener at localhost.localdomain/36325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:17:15,380 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:17:15,380 WARN [BP-1436379343-136.243.18.41-1689952628262 heartbeating to localhost.localdomain/127.0.0.1:35877] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1436379343-136.243.18.41-1689952628262 (Datanode Uuid 27bc0075-28f6-490f-aa39-8668e23ce88c) service to localhost.localdomain/127.0.0.1:35877 2023-07-21 15:17:15,380 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data1/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:15,381 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d4d8472c-0174-ae46-3524-b0c94b3db5c0/cluster_871f1ca3-2577-c854-e1f3-e98356ef5dbd/dfs/data/data2/current/BP-1436379343-136.243.18.41-1689952628262] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:17:15,390 INFO [Listener at localhost.localdomain/36325] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 15:17:15,505 INFO [Listener at localhost.localdomain/36325] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 15:17:15,533 INFO [Listener at localhost.localdomain/36325] hbase.HBaseTestingUtility(1293): Minicluster is down